Jan 27 12:11:45 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 27 12:11:45 crc restorecon[4683]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 12:11:45 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 
12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc 
restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 12:11:46 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 12:11:47 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 12:11:47 crc restorecon[4683]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 12:11:47 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 12:11:47 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 12:11:47 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 12:11:47 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 12:11:47 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 12:11:47 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 12:11:47 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 12:11:47 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 12:11:47 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 12:11:47 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 12:11:47 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 12:11:47 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 12:11:47 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 27 12:11:47 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 27 12:11:47 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 12:11:47 crc restorecon[4683]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 12:11:47 crc restorecon[4683]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 12:11:47 crc restorecon[4683]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 12:11:47 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 12:11:47 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 12:11:47 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 12:11:47 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 12:11:47 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 12:11:47 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 12:11:47 crc restorecon[4683]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 12:11:47 crc restorecon[4683]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 27 12:11:47 crc kubenswrapper[4745]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 12:11:47 crc kubenswrapper[4745]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 27 12:11:47 crc kubenswrapper[4745]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 12:11:47 crc kubenswrapper[4745]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 27 12:11:47 crc kubenswrapper[4745]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 27 12:11:47 crc kubenswrapper[4745]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.776977 4745 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.784764 4745 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.784799 4745 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.784838 4745 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.784849 4745 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.784858 4745 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.784872 4745 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.784883 4745 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.784894 4745 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.784903 4745 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.784912 4745 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.784922 4745 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.784933 4745 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.784942 4745 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.784951 4745 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.784959 4745 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.784968 4745 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.784976 4745 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.784984 4745 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.784993 4745 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785002 4745 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785010 4745 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785018 4745 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785026 4745 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785035 4745 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785043 4745 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785052 4745 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785060 4745 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785070 4745 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785078 4745 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785088 4745 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785096 4745 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785105 4745 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785114 4745 feature_gate.go:330] unrecognized feature gate: Example
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785123 4745 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785131 4745 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785140 4745 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785148 4745 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785156 4745 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785164 4745 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785172 4745 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785181 4745 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785189 4745 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785198 4745 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785207 4745 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785215 4745 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785224 4745 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785232 4745 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785243 4745 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785252 4745 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785261 4745 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785270 4745 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785279 4745 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785287 4745 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785296 4745 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785308 4745 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785319 4745 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785330 4745 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785339 4745 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785348 4745 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785357 4745 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785366 4745 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785376 4745 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785385 4745 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785395 4745 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785403 4745 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785412 4745 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785420 4745 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785428 4745 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785438 4745 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785448 4745 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.785459 4745 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786523 4745 flags.go:64] FLAG: --address="0.0.0.0"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786555 4745 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786588 4745 flags.go:64] FLAG: --anonymous-auth="true"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786620 4745 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786636 4745 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786649 4745 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786670 4745 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786684 4745 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786697 4745 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786710 4745 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786723 4745 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786738 4745 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786750 4745 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786762 4745 flags.go:64] FLAG: --cgroup-root=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786774 4745 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786786 4745 flags.go:64] FLAG: --client-ca-file=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786802 4745 flags.go:64] FLAG: --cloud-config=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786858 4745 flags.go:64] FLAG: --cloud-provider=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786869 4745 flags.go:64] FLAG: --cluster-dns="[]"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786881 4745 flags.go:64] FLAG: --cluster-domain=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786891 4745 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786901 4745 flags.go:64] FLAG: --config-dir=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786911 4745 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786921 4745 flags.go:64] FLAG: --container-log-max-files="5"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786933 4745 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786943 4745 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786953 4745 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786964 4745 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786975 4745 flags.go:64] FLAG: --contention-profiling="false"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786984 4745 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.786994 4745 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787005 4745 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787015 4745 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787026 4745 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787036 4745 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787046 4745 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787055 4745 flags.go:64] FLAG: --enable-load-reader="false"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787067 4745 flags.go:64] FLAG: --enable-server="true"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787077 4745 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787090 4745 flags.go:64] FLAG: --event-burst="100"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787100 4745 flags.go:64] FLAG: --event-qps="50"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787111 4745 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787121 4745 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787131 4745 flags.go:64] FLAG: --eviction-hard=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787143 4745 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787153 4745 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787162 4745 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787173 4745 flags.go:64] FLAG: --eviction-soft=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787183 4745 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787193 4745 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787203 4745 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787213 4745 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787222 4745 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787232 4745 flags.go:64] FLAG: --fail-swap-on="true"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787242 4745 flags.go:64] FLAG: --feature-gates=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787253 4745 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787263 4745 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787274 4745 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787284 4745 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787294 4745 flags.go:64] FLAG: --healthz-port="10248"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787304 4745 flags.go:64] FLAG: --help="false"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787314 4745 flags.go:64] FLAG: --hostname-override=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787323 4745 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787333 4745 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787345 4745 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787354 4745 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787363 4745 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787373 4745 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787382 4745 flags.go:64] FLAG: --image-service-endpoint=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787393 4745 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787402 4745 flags.go:64] FLAG: --kube-api-burst="100"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787412 4745 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787422 4745 flags.go:64] FLAG: --kube-api-qps="50"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787431 4745 flags.go:64] FLAG: --kube-reserved=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787442 4745 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787451 4745 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787461 4745 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787471 4745 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787480 4745 flags.go:64] FLAG: --lock-file=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787490 4745 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787501 4745 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787514 4745 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787533 4745 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787549 4745 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787561 4745 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787574 4745 flags.go:64] FLAG: --logging-format="text"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787587 4745 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787602 4745 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787614 4745 flags.go:64] FLAG: --manifest-url=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787625 4745 flags.go:64] FLAG: --manifest-url-header=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787642 4745 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787653 4745 flags.go:64] FLAG: --max-open-files="1000000"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787665 4745 flags.go:64] FLAG: --max-pods="110"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787675 4745 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787685 4745 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787695 4745 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787705 4745 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787715 4745 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787725 4745 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787734 4745 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787758 4745 flags.go:64] FLAG: --node-status-max-images="50"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787768 4745 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787778 4745 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787788 4745 flags.go:64] FLAG: --pod-cidr=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787798 4745 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787839 4745 flags.go:64] FLAG: --pod-manifest-path=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787849 4745 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787860 4745 flags.go:64] FLAG: --pods-per-core="0"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787870 4745 flags.go:64] FLAG: --port="10250"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787880 4745 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787890 4745 flags.go:64] FLAG: --provider-id=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787900 4745 flags.go:64] FLAG: --qos-reserved=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787909 4745 flags.go:64] FLAG: --read-only-port="10255"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787920 4745 flags.go:64] FLAG: --register-node="true"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787930 4745 flags.go:64] FLAG: --register-schedulable="true"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787940 4745 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787955 4745 flags.go:64] FLAG: --registry-burst="10"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787964 4745 flags.go:64] FLAG: --registry-qps="5"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787974 4745 flags.go:64] FLAG: --reserved-cpus=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787984 4745 flags.go:64] FLAG: --reserved-memory=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.787996 4745 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788006 4745 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788016 4745 flags.go:64] FLAG: --rotate-certificates="false"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788025 4745 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788036 4745 flags.go:64] FLAG: --runonce="false"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788045 4745 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788056 4745 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788065 4745 flags.go:64] FLAG: --seccomp-default="false"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788075 4745 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788085 4745 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788096 4745 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788106 4745 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788116 4745 flags.go:64] FLAG: --storage-driver-password="root"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788126 4745 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788136 4745 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788146 4745 flags.go:64] FLAG: --storage-driver-user="root"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788157 4745 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788167 4745 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788177 4745 flags.go:64] FLAG: --system-cgroups=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788186 4745 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788201 4745 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788211 4745 flags.go:64] FLAG: --tls-cert-file=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788221 4745 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788232 4745 flags.go:64] FLAG: --tls-min-version=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788242 4745 flags.go:64] FLAG: --tls-private-key-file=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788252 4745 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788262 4745 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788272 4745 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788282 4745 flags.go:64] FLAG: --v="2"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788294 4745 flags.go:64] FLAG: --version="false"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788306 4745 flags.go:64] FLAG: --vmodule=""
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788318 4745 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.788328 4745 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788541 4745 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788552 4745 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788562 4745 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788572 4745 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788582 4745 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788591 4745 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788601 4745 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788610 4745 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788621 4745 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788630 4745 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788642 4745 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788652 4745 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788663 4745 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788672 4745 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788681 4745 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788690 4745 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788699 4745 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788708 4745 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788717 4745 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788726 4745 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788736 4745 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788745 4745 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788756 4745 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788766 4745 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788777 4745 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788786 4745 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788795 4745 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788804 4745 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788837 4745 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788846 4745 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788855 4745 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788863 4745 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788872 4745 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788880 4745 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788889 4745 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788897 4745 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788905 4745 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788914 4745 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788922 4745 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788931 4745 feature_gate.go:330] unrecognized feature gate: Example
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788939 4745 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788958 4745 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788967 4745 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788975 4745 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788984 4745 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.788992 4745 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.789001 4745 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.789009 4745 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.789017 4745 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.789026 4745 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.789034 4745 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.789043 4745 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.789051 4745 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.789059 4745 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.789068 4745 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.789077 4745 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.789086 4745 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.789095 4745 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.789106 4745 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.789116 4745 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.789125 4745 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.789134 4745 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.789142 4745 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.789150 4745 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.789159 4745 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.789170 4745 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.789180 4745 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.789190 4745 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.789199 4745 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.789208 4745 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.789218 4745 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.789231 4745 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.805697 4745 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.805760 4745 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.805965 4745 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.805988 4745 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806000 4745 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806013 4745 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806026 4745 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806037 4745 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806048 4745 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806059 4745 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806070 4745 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806081 4745 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806091 4745 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806103 4745 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806155 4745 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806168 4745 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806180 4745 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806192 4745 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806207 4745 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806220 4745 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806232 4745 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806247 4745 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806266 4745 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806280 4745 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806292 4745 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806304 4745 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806315 4745 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806326 4745 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806337 4745 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806348 4745 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806361 4745 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806372 4745 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806383 4745 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806395 4745 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806406 4745 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806417 4745 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806428 4745 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806440 4745 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806450 4745 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806461 4745 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806472 4745 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806483 4745 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806495 4745 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806506 4745 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806517 4745 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806529 4745 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806539 4745 feature_gate.go:330] unrecognized feature gate: Example
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806551 4745 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806562 4745 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806574 4745 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806585 4745 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806596 4745 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806607 4745 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806623 4745 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806636 4745 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806649 4745 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806661 4745 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806673 4745 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806686 4745 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806700 4745 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806712 4745 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806723 4745 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806734 4745 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806746 4745 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806761 4745 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806775 4745 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806789 4745 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806801 4745 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806859 4745 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806872 4745 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806885 4745 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806896 4745 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.806908 4745 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.806927 4745 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807250 4745 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807274 4745 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807291 4745 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807307 4745 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807322 4745 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807334 4745 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807345 4745 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807356 4745 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807369 4745 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807381 4745 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807392 4745 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807404 4745 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807415 4745 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807426 4745 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807438 4745 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807449 4745 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807460 4745 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807471 4745 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807481 4745 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807493 4745 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807505 4745 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807516 4745 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807527 4745 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807539 4745 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807549 4745 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807561 4745 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807572 4745 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807583 4745 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807593 4745 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807604 4745 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807616 4745 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807627 4745 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807638 4745 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807648 4745 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807661 4745 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807672 4745 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807685 4745 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807700 4745 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807711 4745 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807723 4745 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807734 4745 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807746 4745 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807757 4745 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807768 4745 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807779 4745 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807790 4745 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807805 4745 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807854 4745 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807866 4745 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807880 4745 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807894 4745 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807907 4745 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807920 4745 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807931 4745 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807943 4745 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807955 4745 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807968 4745 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807980 4745 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.807992 4745 feature_gate.go:330] unrecognized feature gate: Example Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.808003 4745 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.808019 4745 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.808034 4745 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.808046 4745 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.808058 4745 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.808070 4745 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.808082 4745 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.808095 4745 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.808107 4745 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.808118 4745 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.808129 4745 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.808140 4745 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.808157 4745 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.812133 4745 server.go:940] "Client 
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.812133 4745 server.go:940] "Client rotation is on, will bootstrap in background" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.818733 4745 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.818910 4745 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.821039 4745 server.go:997] "Starting client certificate rotation" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.821152 4745 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.821989 4745 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-05 08:11:19.982891945 +0000 UTC Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.822087 4745 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.861418 4745 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.863075 4745 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 12:11:47 crc kubenswrapper[4745]: E0127 12:11:47.864307 4745 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.233:6443: connect: connection refused" logger="UnhandledError" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.879740 4745 log.go:25] "Validated CRI v1 runtime API" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.940925 4745 log.go:25] "Validated CRI v1 image API" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.942927 4745 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.948352 4745 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-27-12-06-30-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.948393 4745 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.964430 4745 manager.go:217] Machine: {Timestamp:2026-01-27 12:11:47.961267671 +0000 UTC m=+0.766178379 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0}
HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:24c1c5dd-133d-4b30-899a-c18b8017a82a BootID:e36e6303-ebda-46d5-bb95-7bd7c6e607a6 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:3e:cc:19 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:3e:cc:19 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:30:8b:c7 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:55:f5:5e Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:2f:a3:ae Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:13:d4:f9 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:02:42:c4:f3:9d:e3 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:de:99:eb:9b:94:cb Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 
Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.964690 4745 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.964797 4745 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.967193 4745 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.967391 4745 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
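The Node Config record that follows carries the kubelet's resource-management knobs for this node: the systemd cgroup driver, SystemReserved of 200m CPU / 350Mi memory / 350Mi ephemeral storage, a PodPidsLimit of 4096, and the hard-eviction thresholds (memory.available < 100Mi absolute, nodefs.available < 10%, imagefs.available < 15%, inode floors at 5%). A toy evaluation of such thresholds, assuming simplified signal and observation types rather than the kubelet eviction manager's internals:

```go
// evictions.go - illustrative sketch of hard-eviction threshold checks.
// Mirrors the shape of the HardEvictionThresholds entries in the record
// below: a threshold is either an absolute quantity or a percentage of
// capacity. All types and observations here are invented for the example.
package main

import "fmt"

type threshold struct {
	signal   string
	quantity int64   // absolute bytes; used when pct == 0
	pct      float64 // fraction of capacity; used when > 0
}

// crossed reports whether observed availability breaches the threshold.
func (t threshold) crossed(available, capacity int64) bool {
	limit := t.quantity
	if t.pct > 0 {
		limit = int64(t.pct * float64(capacity))
	}
	return available < limit
}

func main() {
	thresholds := []threshold{
		{signal: "memory.available", quantity: 100 << 20}, // 100Mi
		{signal: "nodefs.available", pct: 0.10},
		{signal: "imagefs.available", pct: 0.15},
	}
	// Hypothetical observations: {available, capacity} in bytes.
	obs := map[string][2]int64{
		"memory.available":  {80 << 20, 32 << 30}, // 80Mi free of 32Gi
		"nodefs.available":  {30 << 30, 80 << 30},
		"imagefs.available": {30 << 30, 80 << 30},
	}
	for _, t := range thresholds {
		o := obs[t.signal]
		if t.crossed(o[0], o[1]) {
			fmt.Printf("E signal %s crossed: begin eviction\n", t.signal)
		} else {
			fmt.Printf("I signal %s ok\n", t.signal)
		}
	}
}
```

Hard thresholds have GracePeriod 0 in the record below, so crossing one triggers eviction immediately; soft thresholds (none configured here) would add a grace period before acting.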
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.967610 4745 topology_manager.go:138] "Creating topology manager with none policy" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.967619 4745 container_manager_linux.go:303] "Creating device plugin manager" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.968526 4745 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.968555 4745 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.968691 4745 state_mem.go:36] "Initialized new in-memory state store" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.968773 4745 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.976290 4745 kubelet.go:418] "Attempting to sync node with API server" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.976329 4745 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.976383 4745 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.976408 4745 kubelet.go:324] "Adding apiserver pod source" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.976434 4745 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.981354 4745 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.233:6443: connect: connection refused Jan 27 12:11:47 crc kubenswrapper[4745]: W0127 12:11:47.981357 4745 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.233:6443: connect: connection refused Jan 27 12:11:47 crc kubenswrapper[4745]: E0127 12:11:47.981416 4745 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.233:6443: connect: connection refused" logger="UnhandledError" Jan 27 12:11:47 crc kubenswrapper[4745]: E0127 12:11:47.981436 4745 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.233:6443: connect: connection refused" logger="UnhandledError" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.984906 4745 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.987207 4745 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.989146 4745 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.991040 4745 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.991082 4745 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.991096 4745 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.991110 4745 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.991132 4745 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.991146 4745 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.991159 4745 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.991180 4745 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.991195 4745 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.991209 4745 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.991237 4745 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.991251 4745 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.992339 4745 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
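The burst of plugins.go:603 records above is the kubelet probing its built-in (in-tree) volume plugins and registering each under a unique kubernetes.io/... name; everything not in that list is expected to arrive through the kubernetes.io/csi plugin instead. A minimal sketch of that name-keyed registry pattern; the VolumePlugin interface and the two toy plugins here are simplified assumptions, not the kubelet's types:

```go
// volume_plugins.go - illustrative sketch of a plugin registry.
// Each plugin registers under a unique name and is later looked up by
// that name when a pod's volume spec is matched to an implementation.
package main

import "fmt"

type VolumePlugin interface {
	Name() string
}

type hostPath struct{}

func (hostPath) Name() string { return "kubernetes.io/host-path" }

type emptyDir struct{}

func (emptyDir) Name() string { return "kubernetes.io/empty-dir" }

// registry maps plugin name -> plugin, rejecting duplicates at load time.
type registry map[string]VolumePlugin

func (r registry) load(p VolumePlugin) error {
	if _, dup := r[p.Name()]; dup {
		return fmt.Errorf("volume plugin %q registered twice", p.Name())
	}
	r[p.Name()] = p
	fmt.Printf("I \"Loaded volume plugin\" pluginName=%q\n", p.Name())
	return nil
}

func main() {
	r := registry{}
	for _, p := range []VolumePlugin{hostPath{}, emptyDir{}} {
		if err := r.load(p); err != nil {
			fmt.Println("E", err)
		}
	}
}
```

Rejecting duplicate names at load time is what guarantees each pluginName appears exactly once in the log above.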
Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.992676 4745 server.go:1280] "Started kubelet" Jan 27 12:11:47 crc kubenswrapper[4745]: I0127 12:11:47.992843 4745 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.233:6443: connect: connection refused Jan 27 12:11:47 crc systemd[1]: Started Kubernetes Kubelet. Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:47.993247 4745 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.002030 4745 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.002663 4745 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.006243 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.006281 4745 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 27 12:11:48 crc kubenswrapper[4745]: E0127 12:11:48.006752 4745 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.006762 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 20:42:21.424009275 +0000 UTC Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.006868 4745 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.006886 4745 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.006933 4745 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 27 12:11:48 crc kubenswrapper[4745]: E0127 12:11:48.007505 4745 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.233:6443: connect: connection refused" interval="200ms" Jan 27 12:11:48 crc kubenswrapper[4745]: W0127 12:11:48.007620 4745 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.233:6443: connect: connection refused Jan 27 12:11:48 crc kubenswrapper[4745]: E0127 12:11:48.007709 4745 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.233:6443: connect: connection refused" logger="UnhandledError" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.008510 4745 factory.go:55] Registering systemd factory Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.008543 4745 factory.go:221] Registration of the systemd container factory successfully Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.009608 4745 factory.go:153] Registering CRI-O factory Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.009647 4745 factory.go:221] Registration of the crio container factory
successfully Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.009743 4745 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.009773 4745 factory.go:103] Registering Raw factory Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.009790 4745 manager.go:1196] Started watching for new ooms in manager Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.011039 4745 manager.go:319] Starting recovery of all containers Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.022228 4745 server.go:460] "Adding debug handlers to kubelet server" Jan 27 12:11:48 crc kubenswrapper[4745]: E0127 12:11:48.021752 4745 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.233:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e9557d04e1510 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 12:11:47.99265512 +0000 UTC m=+0.797565798,LastTimestamp:2026-01-27 12:11:47.99265512 +0000 UTC m=+0.797565798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023558 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023600 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023630 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023640 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023650 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023660 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023670 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023679 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023689 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023697 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023707 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023720 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023729 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023741 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023751 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023760 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023787 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023798 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023819 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023829 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023838 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023849 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023861 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023870 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023908 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023920 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023951 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023963 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023975 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023985 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.023997 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024006 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024218 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024229 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024240 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024250 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024260 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024271 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024283 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024294 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024304 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024313 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024323 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024332 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024344 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024373 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024386 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024398 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024409 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024419 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024428 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024438 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024454 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024465 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024477 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024493 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024503 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024533 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024545 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024555 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024565 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024575 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024585 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024596 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024606 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024615 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024624 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.024636 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.028838 4745 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.028866 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.028879 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.028890 4745 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.028903 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.028914 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.028924 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.028949 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.028961 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.028971 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.028981 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.028994 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029038 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029049 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029060 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029071 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029081 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029090 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029101 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029113 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029124 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029134 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029144 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029154 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029164 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029174 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029186 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029197 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029210 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029222 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029238 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029249 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029258 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029267 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029278 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029288 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029298 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029313 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029323 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029334 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029344 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029354 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029364 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029374 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029386 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029396 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029405 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029415 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029425 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029435 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029446 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029458 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029468 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029478 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029488 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029498 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029507 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029518 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029528 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029536 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029544 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029554 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029563 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029572 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029584 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029594 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029603 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029612 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029620 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029629 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029639 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029648 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029657 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029666 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029676 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029685 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029693 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029704 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029713 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029721 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029730 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029738 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029748 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029757 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029766 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029775 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029855 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029870 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029880 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029889 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029899 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029908 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029919 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029929 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029938 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029948 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029960 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029968 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029978 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029988 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.029997 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030007 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030016 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030026 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030035 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030072 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030081 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030091 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030100 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030108 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030119 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030132 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030144 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030156 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030188 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030200 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030212 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030225 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030238 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030251 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030264 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030275 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030287 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030300 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030311 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030338 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030349 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030363 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030379 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030391 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030404 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030415 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030426 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030438 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030450 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030468 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030480 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030492 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030504 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030514 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030525 4745 reconstruct.go:97] "Volume reconstruction finished" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.030534 4745 reconciler.go:26] "Reconciler: start to sync state" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.038900 4745 manager.go:324] Recovery completed Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.048595 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.050561 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.050605 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.050617 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.051533 4745 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.051578 4745 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.051636 4745 state_mem.go:36] "Initialized new in-memory state store" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.068849 4745 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.070674 4745 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.072395 4745 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.072511 4745 kubelet.go:2335] "Starting kubelet main sync loop" Jan 27 12:11:48 crc kubenswrapper[4745]: E0127 12:11:48.072602 4745 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 27 12:11:48 crc kubenswrapper[4745]: W0127 12:11:48.075778 4745 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.233:6443: connect: connection refused Jan 27 12:11:48 crc kubenswrapper[4745]: E0127 12:11:48.076368 4745 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.233:6443: connect: connection refused" logger="UnhandledError" Jan 27 12:11:48 crc kubenswrapper[4745]: E0127 12:11:48.107755 4745 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.126912 4745 policy_none.go:49] "None policy: Start" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.128106 4745 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.128134 4745 state_mem.go:35] "Initializing new in-memory state store" Jan 27 12:11:48 crc kubenswrapper[4745]: E0127 12:11:48.173108 4745 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 27 12:11:48 crc kubenswrapper[4745]: E0127 12:11:48.208118 4745 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 12:11:48 crc kubenswrapper[4745]: E0127 12:11:48.210359 4745 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.233:6443: connect: connection refused" interval="400ms" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.293169 4745 manager.go:334] "Starting Device Plugin manager" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.293265 4745 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.293285 4745 server.go:79] "Starting device plugin registration server" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.293907 4745 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.293929 4745 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.294156 4745 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.294903 4745 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 
12:11:48.294918 4745 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 27 12:11:48 crc kubenswrapper[4745]: E0127 12:11:48.299857 4745 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.373922 4745 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.374211 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.375914 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.375961 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.375976 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.376117 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.376487 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.376547 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.376926 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.376950 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.376962 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.377065 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.377250 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.377289 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.377647 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.377675 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.377689 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.378570 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.378584 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.378611 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.378625 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.378594 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.378743 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.379057 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.379093 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.379842 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.379866 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.379879 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.380029 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.380996 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.381016 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.381026 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.381121 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.381501 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.381526 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.382519 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.382540 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.382550 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.382685 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.382705 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.383129 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.383144 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.383155 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.384095 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.384115 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.384125 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.395085 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.396226 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.396325 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.396395 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.396469 4745 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 12:11:48 crc kubenswrapper[4745]: E0127 12:11:48.396959 4745 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.233:6443: connect: connection refused" node="crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.436188 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.436431 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.436531 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 
12:11:48.436616 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.436741 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.436863 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.436961 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.437049 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.437481 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.437558 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.437632 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.437717 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.437797 4745 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.437941 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.438025 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565182 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565272 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565302 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565339 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565364 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565386 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565405 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565470 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565424 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565508 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565537 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565547 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565574 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565589 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565591 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565602 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565617 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565627 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565654 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565630 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565689 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565684 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565732 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565753 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565790 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565855 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565915 4745 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565939 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565959 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.565992 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.597519 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:48 crc kubenswrapper[4745]: E0127 12:11:48.611544 4745 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.233:6443: connect: connection refused" interval="800ms" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.638746 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.638829 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.638843 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.638881 4745 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 12:11:48 crc kubenswrapper[4745]: E0127 12:11:48.639503 4745 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.233:6443: connect: connection refused" node="crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.700683 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.727227 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.764954 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.787633 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.794874 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 12:11:48 crc kubenswrapper[4745]: W0127 12:11:48.837459 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-451c380884f8436431b6af08dc41cc8c70e6308b54824381dafefd128b56c9fa WatchSource:0}: Error finding container 451c380884f8436431b6af08dc41cc8c70e6308b54824381dafefd128b56c9fa: Status 404 returned error can't find the container with id 451c380884f8436431b6af08dc41cc8c70e6308b54824381dafefd128b56c9fa Jan 27 12:11:48 crc kubenswrapper[4745]: W0127 12:11:48.839874 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-b13c199ea97bc0237846ffe0bc68c3d018585273b969f4f29e75a20dc22a2f45 WatchSource:0}: Error finding container b13c199ea97bc0237846ffe0bc68c3d018585273b969f4f29e75a20dc22a2f45: Status 404 returned error can't find the container with id b13c199ea97bc0237846ffe0bc68c3d018585273b969f4f29e75a20dc22a2f45 Jan 27 12:11:48 crc kubenswrapper[4745]: W0127 12:11:48.842918 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-dc1c1dd447abe8d50e10d18f5cd289478424be54603134e215b07921f3a45681 WatchSource:0}: Error finding container dc1c1dd447abe8d50e10d18f5cd289478424be54603134e215b07921f3a45681: Status 404 returned error can't find the container with id dc1c1dd447abe8d50e10d18f5cd289478424be54603134e215b07921f3a45681 Jan 27 12:11:48 crc kubenswrapper[4745]: W0127 12:11:48.845624 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-93f1b1d1f3b7d2b94676ce9fe49db10d7dcbe2d11a33b5773bdfade41ae1b9d4 WatchSource:0}: Error finding container 93f1b1d1f3b7d2b94676ce9fe49db10d7dcbe2d11a33b5773bdfade41ae1b9d4: Status 404 returned error can't find the container with id 93f1b1d1f3b7d2b94676ce9fe49db10d7dcbe2d11a33b5773bdfade41ae1b9d4 Jan 27 12:11:48 crc kubenswrapper[4745]: W0127 12:11:48.850079 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-6b7558faa3e5100b911a3051b7e11b188ebd1dafa2d25e526604b1db47158a06 WatchSource:0}: Error finding container 6b7558faa3e5100b911a3051b7e11b188ebd1dafa2d25e526604b1db47158a06: Status 404 returned error can't find the container with id 6b7558faa3e5100b911a3051b7e11b188ebd1dafa2d25e526604b1db47158a06 Jan 27 12:11:48 crc kubenswrapper[4745]: I0127 12:11:48.994980 4745 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.233:6443: connect: connection refused Jan 27 12:11:49 crc kubenswrapper[4745]: I0127 12:11:49.007034 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 16:22:12.508441312 +0000 UTC Jan 27 12:11:49 crc kubenswrapper[4745]: I0127 
12:11:49.040346 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:49 crc kubenswrapper[4745]: I0127 12:11:49.041706 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:49 crc kubenswrapper[4745]: I0127 12:11:49.041806 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:49 crc kubenswrapper[4745]: I0127 12:11:49.041844 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:49 crc kubenswrapper[4745]: I0127 12:11:49.041874 4745 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 12:11:49 crc kubenswrapper[4745]: E0127 12:11:49.042566 4745 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.233:6443: connect: connection refused" node="crc" Jan 27 12:11:49 crc kubenswrapper[4745]: I0127 12:11:49.077491 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"451c380884f8436431b6af08dc41cc8c70e6308b54824381dafefd128b56c9fa"} Jan 27 12:11:49 crc kubenswrapper[4745]: I0127 12:11:49.078520 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6b7558faa3e5100b911a3051b7e11b188ebd1dafa2d25e526604b1db47158a06"} Jan 27 12:11:49 crc kubenswrapper[4745]: W0127 12:11:49.079483 4745 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.233:6443: connect: connection refused Jan 27 12:11:49 crc kubenswrapper[4745]: E0127 12:11:49.079536 4745 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.233:6443: connect: connection refused" logger="UnhandledError" Jan 27 12:11:49 crc kubenswrapper[4745]: I0127 12:11:49.079546 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"93f1b1d1f3b7d2b94676ce9fe49db10d7dcbe2d11a33b5773bdfade41ae1b9d4"} Jan 27 12:11:49 crc kubenswrapper[4745]: I0127 12:11:49.080310 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dc1c1dd447abe8d50e10d18f5cd289478424be54603134e215b07921f3a45681"} Jan 27 12:11:49 crc kubenswrapper[4745]: I0127 12:11:49.081181 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"b13c199ea97bc0237846ffe0bc68c3d018585273b969f4f29e75a20dc22a2f45"} Jan 27 12:11:49 crc kubenswrapper[4745]: W0127 12:11:49.181245 4745 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.233:6443: connect: connection refused Jan 27 12:11:49 crc kubenswrapper[4745]: E0127 12:11:49.181564 4745 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.233:6443: connect: connection refused" logger="UnhandledError" Jan 27 12:11:49 crc kubenswrapper[4745]: W0127 12:11:49.198495 4745 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.233:6443: connect: connection refused Jan 27 12:11:49 crc kubenswrapper[4745]: E0127 12:11:49.198570 4745 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.233:6443: connect: connection refused" logger="UnhandledError" Jan 27 12:11:49 crc kubenswrapper[4745]: W0127 12:11:49.371315 4745 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.233:6443: connect: connection refused Jan 27 12:11:49 crc kubenswrapper[4745]: E0127 12:11:49.371408 4745 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.233:6443: connect: connection refused" logger="UnhandledError" Jan 27 12:11:49 crc kubenswrapper[4745]: E0127 12:11:49.412391 4745 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.233:6443: connect: connection refused" interval="1.6s" Jan 27 12:11:49 crc kubenswrapper[4745]: I0127 12:11:49.843196 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:49 crc kubenswrapper[4745]: I0127 12:11:49.845015 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:49 crc kubenswrapper[4745]: I0127 12:11:49.845047 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:49 crc kubenswrapper[4745]: I0127 12:11:49.845057 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:49 crc kubenswrapper[4745]: I0127 12:11:49.845078 4745 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 12:11:49 crc kubenswrapper[4745]: E0127 12:11:49.845454 4745 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.233:6443: connect: connection refused" node="crc" Jan 27 12:11:49 crc kubenswrapper[4745]: I0127 12:11:49.899617 4745 certificate_manager.go:356] 
kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 12:11:49 crc kubenswrapper[4745]: E0127 12:11:49.900640 4745 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.233:6443: connect: connection refused" logger="UnhandledError" Jan 27 12:11:49 crc kubenswrapper[4745]: I0127 12:11:49.994451 4745 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.233:6443: connect: connection refused Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.008041 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 01:35:36.410142917 +0000 UTC Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.087395 4745 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f" exitCode=0 Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.087509 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.087536 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f"} Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.088933 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.088982 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.089000 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.091148 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.091284 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3"} Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.092019 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a"} Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.092624 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.092660 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 
12:11:50.092670 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.093792 4745 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ca4b44d3d5680dffdfdc1ba3e1b3e6c175a6055bd7bb9837bbb3697c57a7e3df" exitCode=0 Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.093905 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ca4b44d3d5680dffdfdc1ba3e1b3e6c175a6055bd7bb9837bbb3697c57a7e3df"} Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.093970 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.094992 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.095019 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.095037 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.095661 4745 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="e47062d82b6186b0018c38cc37381f06f5d1791c4e72b7a48b78b5e29b6b7ed6" exitCode=0 Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.095707 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.095738 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"e47062d82b6186b0018c38cc37381f06f5d1791c4e72b7a48b78b5e29b6b7ed6"} Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.096600 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.096638 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.096652 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.097728 4745 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="15cb80506071a3f2c69004a1281de26c5d8e3ab484c430453248f65e3c61b25f" exitCode=0 Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.097768 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"15cb80506071a3f2c69004a1281de26c5d8e3ab484c430453248f65e3c61b25f"} Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.097898 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.099167 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:50 crc 
kubenswrapper[4745]: I0127 12:11:50.099208 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.099219 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:50 crc kubenswrapper[4745]: W0127 12:11:50.952640 4745 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.233:6443: connect: connection refused Jan 27 12:11:50 crc kubenswrapper[4745]: E0127 12:11:50.952731 4745 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.233:6443: connect: connection refused" logger="UnhandledError" Jan 27 12:11:50 crc kubenswrapper[4745]: I0127 12:11:50.994001 4745 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.233:6443: connect: connection refused Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.008713 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 15:56:13.636113308 +0000 UTC Jan 27 12:11:51 crc kubenswrapper[4745]: E0127 12:11:51.013234 4745 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.233:6443: connect: connection refused" interval="3.2s" Jan 27 12:11:51 crc kubenswrapper[4745]: W0127 12:11:51.024641 4745 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.233:6443: connect: connection refused Jan 27 12:11:51 crc kubenswrapper[4745]: E0127 12:11:51.024701 4745 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.233:6443: connect: connection refused" logger="UnhandledError" Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.231206 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657"} Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.231242 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19"} Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.231251 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466"} Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.235924 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2"} Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.236022 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353"} Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.236119 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.237235 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.237261 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.237272 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.241274 4745 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0553b6cf48d0ed1615aee411fdcebe5139b6d65101e3127311bfb3d9ab41db89" exitCode=0 Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.241450 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0553b6cf48d0ed1615aee411fdcebe5139b6d65101e3127311bfb3d9ab41db89"} Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.241657 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.242920 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.243179 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.243307 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.248379 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e13bcf8dab718f5f304900925344e57f81d51589ee3a9d0be7ccc1a624e43b28"} Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.248480 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.249551 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.249584 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.249594 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.253215 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f1e1b927f3b132dddf8fc7a708f4eaff3596ea17106764af4369e50b0165a373"} Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.253273 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"51afaee5ac37b7ae8bf400d42e8d2d6a3e12687ea9e080d7faa4e09dff5bf758"} Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.253294 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"082334b7a9de2a2744faf97640bf056777c5f2da7cf5ff825121305a40dbf6b0"} Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.253416 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.255293 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.255326 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.255341 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.445583 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.446748 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.446799 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.446853 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.446873 4745 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 12:11:51 crc kubenswrapper[4745]: E0127 12:11:51.447423 4745 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.233:6443: connect: connection refused" node="crc" Jan 27 12:11:51 crc kubenswrapper[4745]: W0127 12:11:51.984655 4745 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.233:6443: connect: connection refused Jan 27 12:11:51 crc kubenswrapper[4745]: E0127 12:11:51.984742 4745 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.233:6443: connect: connection refused" logger="UnhandledError" Jan 27 12:11:51 crc kubenswrapper[4745]: I0127 12:11:51.994490 4745 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.233:6443: connect: connection refused Jan 27 12:11:52 crc kubenswrapper[4745]: I0127 12:11:52.009169 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 12:42:36.24822834 +0000 UTC Jan 27 12:11:52 crc kubenswrapper[4745]: W0127 12:11:52.078800 4745 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.233:6443: connect: connection refused Jan 27 12:11:52 crc kubenswrapper[4745]: E0127 12:11:52.078906 4745 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.233:6443: connect: connection refused" logger="UnhandledError" Jan 27 12:11:52 crc kubenswrapper[4745]: I0127 12:11:52.257417 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"88f6ca9e27818b5f81f8a369818de8426da86cf2a400e157129947b4115fe61f"} Jan 27 12:11:52 crc kubenswrapper[4745]: I0127 12:11:52.257456 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b"} Jan 27 12:11:52 crc kubenswrapper[4745]: I0127 12:11:52.257544 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:52 crc kubenswrapper[4745]: I0127 12:11:52.258541 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:52 crc kubenswrapper[4745]: I0127 12:11:52.258564 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:52 crc kubenswrapper[4745]: I0127 12:11:52.258571 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:52 crc kubenswrapper[4745]: I0127 12:11:52.260351 4745 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="cff940c021631f4b288dbe84d49651fef2cd303a5145682a32fc92efc1d7f533" exitCode=0 Jan 27 12:11:52 crc kubenswrapper[4745]: I0127 12:11:52.260416 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:52 crc kubenswrapper[4745]: I0127 12:11:52.265210 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:52 crc kubenswrapper[4745]: I0127 12:11:52.265329 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 
12:11:52 crc kubenswrapper[4745]: I0127 12:11:52.265774 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"cff940c021631f4b288dbe84d49651fef2cd303a5145682a32fc92efc1d7f533"} Jan 27 12:11:52 crc kubenswrapper[4745]: I0127 12:11:52.265852 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:52 crc kubenswrapper[4745]: I0127 12:11:52.265873 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:52 crc kubenswrapper[4745]: I0127 12:11:52.265883 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:52 crc kubenswrapper[4745]: I0127 12:11:52.266178 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:52 crc kubenswrapper[4745]: I0127 12:11:52.266205 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:52 crc kubenswrapper[4745]: I0127 12:11:52.266220 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:52 crc kubenswrapper[4745]: I0127 12:11:52.266922 4745 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 12:11:52 crc kubenswrapper[4745]: I0127 12:11:52.266953 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:52 crc kubenswrapper[4745]: I0127 12:11:52.267408 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:52 crc kubenswrapper[4745]: I0127 12:11:52.267450 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:52 crc kubenswrapper[4745]: I0127 12:11:52.267464 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:52 crc kubenswrapper[4745]: I0127 12:11:52.268329 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:52 crc kubenswrapper[4745]: I0127 12:11:52.268352 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:52 crc kubenswrapper[4745]: I0127 12:11:52.268360 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:52 crc kubenswrapper[4745]: I0127 12:11:52.357461 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:11:52 crc kubenswrapper[4745]: I0127 12:11:52.357594 4745 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 27 12:11:52 crc kubenswrapper[4745]: I0127 12:11:52.357702 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection 
refused" Jan 27 12:11:53 crc kubenswrapper[4745]: I0127 12:11:53.009417 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 07:58:01.162239244 +0000 UTC Jan 27 12:11:53 crc kubenswrapper[4745]: I0127 12:11:53.268175 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0ae75b3ed379eb99d8180d1d5c219162091a005417eb4a1a761a55a8e0c0d4a9"} Jan 27 12:11:53 crc kubenswrapper[4745]: I0127 12:11:53.268453 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"465281f942bb784e76c7af2603c96bac1a39846241c312240d13358c76198236"} Jan 27 12:11:53 crc kubenswrapper[4745]: I0127 12:11:53.268468 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"885cf439685b31024e562fdd9fa61ad7ac0c194161f3e73bf4539e9ce01083ed"} Jan 27 12:11:53 crc kubenswrapper[4745]: I0127 12:11:53.268478 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"793dd3d3a9175cc2a3da3abc93e0dabb087d6d3fbad098a4d73dddcecec12b22"} Jan 27 12:11:53 crc kubenswrapper[4745]: I0127 12:11:53.268311 4745 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 12:11:53 crc kubenswrapper[4745]: I0127 12:11:53.268526 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:53 crc kubenswrapper[4745]: I0127 12:11:53.269424 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:53 crc kubenswrapper[4745]: I0127 12:11:53.269455 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:53 crc kubenswrapper[4745]: I0127 12:11:53.269466 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:53 crc kubenswrapper[4745]: I0127 12:11:53.885678 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 12:11:53 crc kubenswrapper[4745]: I0127 12:11:53.885987 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:53 crc kubenswrapper[4745]: I0127 12:11:53.887377 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:53 crc kubenswrapper[4745]: I0127 12:11:53.887417 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:53 crc kubenswrapper[4745]: I0127 12:11:53.887434 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:54 crc kubenswrapper[4745]: I0127 12:11:54.010337 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 12:16:14.919652369 +0000 UTC Jan 27 12:11:54 crc kubenswrapper[4745]: I0127 12:11:54.244983 4745 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 
12:11:54 crc kubenswrapper[4745]: I0127 12:11:54.276191 4745 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 12:11:54 crc kubenswrapper[4745]: I0127 12:11:54.276265 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:54 crc kubenswrapper[4745]: I0127 12:11:54.277176 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:54 crc kubenswrapper[4745]: I0127 12:11:54.277744 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a1bceddf593dfa454e9e3c48f172332fe600acae1b91c0d4d3bcfcd7153b1f72"} Jan 27 12:11:54 crc kubenswrapper[4745]: I0127 12:11:54.278294 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:54 crc kubenswrapper[4745]: I0127 12:11:54.278327 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:54 crc kubenswrapper[4745]: I0127 12:11:54.278345 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:54 crc kubenswrapper[4745]: I0127 12:11:54.279479 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:54 crc kubenswrapper[4745]: I0127 12:11:54.279505 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:54 crc kubenswrapper[4745]: I0127 12:11:54.279523 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:54 crc kubenswrapper[4745]: I0127 12:11:54.648225 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:54 crc kubenswrapper[4745]: I0127 12:11:54.650319 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:54 crc kubenswrapper[4745]: I0127 12:11:54.650384 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:54 crc kubenswrapper[4745]: I0127 12:11:54.650397 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:54 crc kubenswrapper[4745]: I0127 12:11:54.650435 4745 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 12:11:54 crc kubenswrapper[4745]: I0127 12:11:54.854607 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:11:55 crc kubenswrapper[4745]: I0127 12:11:55.010446 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 16:56:56.845995491 +0000 UTC Jan 27 12:11:55 crc kubenswrapper[4745]: I0127 12:11:55.279087 4745 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 12:11:55 crc kubenswrapper[4745]: I0127 12:11:55.279112 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:55 crc kubenswrapper[4745]: I0127 12:11:55.279150 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:55 crc kubenswrapper[4745]: I0127 
12:11:55.280288 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:55 crc kubenswrapper[4745]: I0127 12:11:55.280327 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:55 crc kubenswrapper[4745]: I0127 12:11:55.280336 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:55 crc kubenswrapper[4745]: I0127 12:11:55.280289 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:55 crc kubenswrapper[4745]: I0127 12:11:55.280400 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:55 crc kubenswrapper[4745]: I0127 12:11:55.280435 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:56 crc kubenswrapper[4745]: I0127 12:11:56.011567 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 05:57:21.134869378 +0000 UTC Jan 27 12:11:56 crc kubenswrapper[4745]: I0127 12:11:56.573156 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 27 12:11:56 crc kubenswrapper[4745]: I0127 12:11:56.573360 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:56 crc kubenswrapper[4745]: I0127 12:11:56.574516 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:56 crc kubenswrapper[4745]: I0127 12:11:56.574549 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:56 crc kubenswrapper[4745]: I0127 12:11:56.574561 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:57 crc kubenswrapper[4745]: I0127 12:11:57.012141 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 05:49:36.152014063 +0000 UTC Jan 27 12:11:57 crc kubenswrapper[4745]: I0127 12:11:57.327576 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 12:11:57 crc kubenswrapper[4745]: I0127 12:11:57.327767 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:57 crc kubenswrapper[4745]: I0127 12:11:57.329175 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:57 crc kubenswrapper[4745]: I0127 12:11:57.329229 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:57 crc kubenswrapper[4745]: I0127 12:11:57.329267 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:57 crc kubenswrapper[4745]: I0127 12:11:57.394217 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:11:57 crc kubenswrapper[4745]: I0127 12:11:57.394477 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Jan 27 12:11:57 crc kubenswrapper[4745]: I0127 12:11:57.395967 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:57 crc kubenswrapper[4745]: I0127 12:11:57.396075 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:57 crc kubenswrapper[4745]: I0127 12:11:57.396104 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:57 crc kubenswrapper[4745]: I0127 12:11:57.535319 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 27 12:11:57 crc kubenswrapper[4745]: I0127 12:11:57.535592 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:57 crc kubenswrapper[4745]: I0127 12:11:57.537288 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:57 crc kubenswrapper[4745]: I0127 12:11:57.537338 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:57 crc kubenswrapper[4745]: I0127 12:11:57.537360 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:57 crc kubenswrapper[4745]: I0127 12:11:57.727791 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 12:11:58 crc kubenswrapper[4745]: I0127 12:11:58.012266 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 15:56:08.131115496 +0000 UTC Jan 27 12:11:58 crc kubenswrapper[4745]: I0127 12:11:58.288129 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:58 crc kubenswrapper[4745]: I0127 12:11:58.289399 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:58 crc kubenswrapper[4745]: I0127 12:11:58.289458 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:58 crc kubenswrapper[4745]: I0127 12:11:58.289470 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:11:58 crc kubenswrapper[4745]: E0127 12:11:58.300868 4745 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 12:11:59 crc kubenswrapper[4745]: I0127 12:11:59.013296 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 23:01:46.034244971 +0000 UTC Jan 27 12:11:59 crc kubenswrapper[4745]: I0127 12:11:59.230649 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 12:11:59 crc kubenswrapper[4745]: I0127 12:11:59.290460 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:11:59 crc kubenswrapper[4745]: I0127 12:11:59.292035 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:11:59 crc kubenswrapper[4745]: I0127 12:11:59.292123 4745 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:11:59 crc kubenswrapper[4745]: I0127 12:11:59.292147 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:00 crc kubenswrapper[4745]: I0127 12:12:00.014562 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 00:29:22.000396326 +0000 UTC Jan 27 12:12:00 crc kubenswrapper[4745]: I0127 12:12:00.328586 4745 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 12:12:00 crc kubenswrapper[4745]: I0127 12:12:00.328681 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 12:12:00 crc kubenswrapper[4745]: I0127 12:12:00.442266 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 12:12:00 crc kubenswrapper[4745]: I0127 12:12:00.442601 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:12:00 crc kubenswrapper[4745]: I0127 12:12:00.444341 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:00 crc kubenswrapper[4745]: I0127 12:12:00.444402 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:00 crc kubenswrapper[4745]: I0127 12:12:00.444441 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:00 crc kubenswrapper[4745]: I0127 12:12:00.453152 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 12:12:01 crc kubenswrapper[4745]: I0127 12:12:01.014718 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 07:34:18.705142367 +0000 UTC Jan 27 12:12:01 crc kubenswrapper[4745]: I0127 12:12:01.296294 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:12:01 crc kubenswrapper[4745]: I0127 12:12:01.297634 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:01 crc kubenswrapper[4745]: I0127 12:12:01.297746 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:01 crc kubenswrapper[4745]: I0127 12:12:01.297797 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:01 crc kubenswrapper[4745]: I0127 12:12:01.304183 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 12:12:02 crc kubenswrapper[4745]: I0127 12:12:02.015751 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 04:03:03.283453747 +0000 UTC Jan 27 12:12:02 crc kubenswrapper[4745]: I0127 12:12:02.298876 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:12:02 crc kubenswrapper[4745]: I0127 12:12:02.300317 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:02 crc kubenswrapper[4745]: I0127 12:12:02.300395 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:02 crc kubenswrapper[4745]: I0127 12:12:02.300422 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:02 crc kubenswrapper[4745]: I0127 12:12:02.995708 4745 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 27 12:12:03 crc kubenswrapper[4745]: I0127 12:12:03.016278 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 21:50:40.538093345 +0000 UTC Jan 27 12:12:03 crc kubenswrapper[4745]: E0127 12:12:03.990045 4745 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.188e9557d04e1510 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 12:11:47.99265512 +0000 UTC m=+0.797565798,LastTimestamp:2026-01-27 12:11:47.99265512 +0000 UTC m=+0.797565798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 12:12:04 crc kubenswrapper[4745]: I0127 12:12:04.017158 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 01:11:56.041251468 +0000 UTC Jan 27 12:12:04 crc kubenswrapper[4745]: E0127 12:12:04.216484 4745 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Jan 27 12:12:04 crc kubenswrapper[4745]: E0127 12:12:04.246902 4745 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 27 12:12:04 crc kubenswrapper[4745]: I0127 12:12:04.304621 4745 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 12:12:04 crc kubenswrapper[4745]: I0127 12:12:04.306044 4745 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="88f6ca9e27818b5f81f8a369818de8426da86cf2a400e157129947b4115fe61f" exitCode=255 Jan 27 12:12:04 crc kubenswrapper[4745]: I0127 12:12:04.306076 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"88f6ca9e27818b5f81f8a369818de8426da86cf2a400e157129947b4115fe61f"} Jan 27 12:12:04 crc kubenswrapper[4745]: I0127 12:12:04.306190 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:12:04 crc kubenswrapper[4745]: I0127 12:12:04.306921 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:04 crc kubenswrapper[4745]: I0127 12:12:04.306947 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:04 crc kubenswrapper[4745]: I0127 12:12:04.306956 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:04 crc kubenswrapper[4745]: I0127 12:12:04.309168 4745 scope.go:117] "RemoveContainer" containerID="88f6ca9e27818b5f81f8a369818de8426da86cf2a400e157129947b4115fe61f" Jan 27 12:12:04 crc kubenswrapper[4745]: E0127 12:12:04.651787 4745 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 27 12:12:05 crc kubenswrapper[4745]: I0127 12:12:05.017521 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 19:33:51.719130499 +0000 UTC Jan 27 12:12:05 crc kubenswrapper[4745]: I0127 12:12:05.276674 4745 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 12:12:05 crc kubenswrapper[4745]: I0127 12:12:05.276733 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 12:12:05 crc kubenswrapper[4745]: I0127 12:12:05.289039 4745 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 12:12:05 crc kubenswrapper[4745]: I0127 12:12:05.289092 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" 
output="HTTP probe failed with statuscode: 403" Jan 27 12:12:05 crc kubenswrapper[4745]: I0127 12:12:05.310585 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 12:12:05 crc kubenswrapper[4745]: I0127 12:12:05.312712 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e"} Jan 27 12:12:05 crc kubenswrapper[4745]: I0127 12:12:05.312901 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:12:05 crc kubenswrapper[4745]: I0127 12:12:05.313887 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:05 crc kubenswrapper[4745]: I0127 12:12:05.313943 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:05 crc kubenswrapper[4745]: I0127 12:12:05.313962 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:06 crc kubenswrapper[4745]: I0127 12:12:06.018207 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 00:51:56.487816602 +0000 UTC Jan 27 12:12:07 crc kubenswrapper[4745]: I0127 12:12:07.018539 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 01:28:06.004586386 +0000 UTC Jan 27 12:12:07 crc kubenswrapper[4745]: I0127 12:12:07.369166 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:12:07 crc kubenswrapper[4745]: I0127 12:12:07.369973 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:12:07 crc kubenswrapper[4745]: I0127 12:12:07.370202 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:12:07 crc kubenswrapper[4745]: I0127 12:12:07.372222 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:07 crc kubenswrapper[4745]: I0127 12:12:07.372284 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:07 crc kubenswrapper[4745]: I0127 12:12:07.372306 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:07 crc kubenswrapper[4745]: I0127 12:12:07.375327 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:12:07 crc kubenswrapper[4745]: I0127 12:12:07.566573 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 27 12:12:07 crc kubenswrapper[4745]: I0127 12:12:07.566793 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:12:07 crc kubenswrapper[4745]: I0127 12:12:07.568508 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:07 crc 
kubenswrapper[4745]: I0127 12:12:07.568662 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:07 crc kubenswrapper[4745]: I0127 12:12:07.568747 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:07 crc kubenswrapper[4745]: I0127 12:12:07.581341 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 27 12:12:08 crc kubenswrapper[4745]: I0127 12:12:08.018957 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 21:32:58.057113874 +0000 UTC Jan 27 12:12:08 crc kubenswrapper[4745]: E0127 12:12:08.301100 4745 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 12:12:08 crc kubenswrapper[4745]: I0127 12:12:08.324610 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:12:08 crc kubenswrapper[4745]: I0127 12:12:08.324673 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:12:08 crc kubenswrapper[4745]: I0127 12:12:08.327503 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:08 crc kubenswrapper[4745]: I0127 12:12:08.327513 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:08 crc kubenswrapper[4745]: I0127 12:12:08.327541 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:08 crc kubenswrapper[4745]: I0127 12:12:08.327560 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:08 crc kubenswrapper[4745]: I0127 12:12:08.327559 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:08 crc kubenswrapper[4745]: I0127 12:12:08.327709 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:09 crc kubenswrapper[4745]: I0127 12:12:09.020581 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 13:48:49.677100365 +0000 UTC Jan 27 12:12:09 crc kubenswrapper[4745]: I0127 12:12:09.327348 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:12:09 crc kubenswrapper[4745]: I0127 12:12:09.328229 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:09 crc kubenswrapper[4745]: I0127 12:12:09.328296 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:09 crc kubenswrapper[4745]: I0127 12:12:09.328319 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:10 crc kubenswrapper[4745]: I0127 12:12:10.022196 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 20:35:12.634609961 +0000 UTC Jan 27 12:12:10 crc kubenswrapper[4745]: I0127 12:12:10.280277 4745 trace.go:236] 
Trace[574441118]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 12:11:55.550) (total time: 14729ms): Jan 27 12:12:10 crc kubenswrapper[4745]: Trace[574441118]: ---"Objects listed" error: 14729ms (12:12:10.280) Jan 27 12:12:10 crc kubenswrapper[4745]: Trace[574441118]: [14.72998947s] [14.72998947s] END Jan 27 12:12:10 crc kubenswrapper[4745]: I0127 12:12:10.280304 4745 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 12:12:10 crc kubenswrapper[4745]: I0127 12:12:10.286782 4745 trace.go:236] Trace[1798191450]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 12:11:56.978) (total time: 13307ms): Jan 27 12:12:10 crc kubenswrapper[4745]: Trace[1798191450]: ---"Objects listed" error: 13307ms (12:12:10.286) Jan 27 12:12:10 crc kubenswrapper[4745]: Trace[1798191450]: [13.307804439s] [13.307804439s] END Jan 27 12:12:10 crc kubenswrapper[4745]: I0127 12:12:10.286823 4745 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 12:12:10 crc kubenswrapper[4745]: I0127 12:12:10.286993 4745 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 27 12:12:10 crc kubenswrapper[4745]: I0127 12:12:10.287244 4745 trace.go:236] Trace[1217555733]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 12:11:57.403) (total time: 12883ms): Jan 27 12:12:10 crc kubenswrapper[4745]: Trace[1217555733]: ---"Objects listed" error: 12883ms (12:12:10.287) Jan 27 12:12:10 crc kubenswrapper[4745]: Trace[1217555733]: [12.883379339s] [12.883379339s] END Jan 27 12:12:10 crc kubenswrapper[4745]: I0127 12:12:10.287273 4745 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 12:12:10 crc kubenswrapper[4745]: I0127 12:12:10.287781 4745 trace.go:236] Trace[784765116]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 12:11:56.904) (total time: 13383ms): Jan 27 12:12:10 crc kubenswrapper[4745]: Trace[784765116]: ---"Objects listed" error: 13383ms (12:12:10.287) Jan 27 12:12:10 crc kubenswrapper[4745]: Trace[784765116]: [13.383204769s] [13.383204769s] END Jan 27 12:12:10 crc kubenswrapper[4745]: I0127 12:12:10.287803 4745 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 12:12:10 crc kubenswrapper[4745]: I0127 12:12:10.328408 4745 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 12:12:10 crc kubenswrapper[4745]: I0127 12:12:10.328468 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 12:12:10 crc kubenswrapper[4745]: I0127 12:12:10.989122 4745 apiserver.go:52] "Watching apiserver" Jan 27 12:12:10 crc kubenswrapper[4745]: I0127 12:12:10.995680 4745 reflector.go:368] Caches populated for *v1.Pod from 
pkg/kubelet/config/apiserver.go:66 Jan 27 12:12:10 crc kubenswrapper[4745]: I0127 12:12:10.996089 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Jan 27 12:12:10 crc kubenswrapper[4745]: I0127 12:12:10.996443 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 12:12:10 crc kubenswrapper[4745]: I0127 12:12:10.996506 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:12:10 crc kubenswrapper[4745]: E0127 12:12:10.996557 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:12:10 crc kubenswrapper[4745]: I0127 12:12:10.996622 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 12:12:10 crc kubenswrapper[4745]: I0127 12:12:10.996639 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 12:12:10 crc kubenswrapper[4745]: I0127 12:12:10.996622 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:10 crc kubenswrapper[4745]: E0127 12:12:10.996798 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:12:10 crc kubenswrapper[4745]: I0127 12:12:10.996853 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:12:10 crc kubenswrapper[4745]: E0127 12:12:10.996964 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:12:10 crc kubenswrapper[4745]: I0127 12:12:10.997833 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 12:12:10 crc kubenswrapper[4745]: I0127 12:12:10.999121 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.003414 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.003447 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.003456 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.003453 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.003573 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.003606 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.003671 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.008056 4745 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.021383 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.022328 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 23:30:31.214849621 +0000 UTC Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.032823 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.041320 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.048538 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.052201 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.053326 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.053363 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.053372 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.053493 4745 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.057622 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.060518 4745 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.060619 4745 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.061610 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.061642 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.061654 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.061672 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.061683 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:11Z","lastTransitionTime":"2026-01-27T12:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.068541 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.074394 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"2
4c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.077486 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.077538 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.077552 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.077573 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.077586 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:11Z","lastTransitionTime":"2026-01-27T12:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.080399 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.087178 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" 
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.090242 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.090274 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.090283 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.090296 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.090305 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:11Z","lastTransitionTime":"2026-01-27T12:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.092432 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.092476 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.092501 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.092523 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.092542 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.092584 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.092629 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.092651 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.092674 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.092695 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.092719 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.092737 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.092746 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.092799 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.092824 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.092842 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.092882 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.092906 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.092936 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.092958 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.092975 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.092994 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.092991 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093041 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093056 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093076 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093006 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093096 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093112 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093006 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093048 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093054 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093154 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093171 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093210 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093226 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093235 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093252 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093245 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093316 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093343 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093368 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093390 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093411 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093432 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093452 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093472 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093494 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093515 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093534 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093553 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093577 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093607 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093631 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093652 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093673 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093695 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093718 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093740 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093761 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093784 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093805 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093847 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093875 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093897 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093916 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093937 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093957 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093980 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093257 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093341 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093440 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093473 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093562 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093618 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093694 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093710 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093758 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093797 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093842 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.093928 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.094006 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.094037 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.094141 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.094152 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.094198 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.094249 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.094393 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.094423 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.094501 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.094522 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.094611 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.094638 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.094711 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.094761 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.094873 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.094888 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.094044 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.094939 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.094958 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.094976 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.094992 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.094998 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095009 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095026 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095065 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095083 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095099 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095116 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095131 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095147 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095162 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095178 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095194 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095214 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095236 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095258 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095279 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095295 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095311 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095325 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095322 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095342 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095377 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095394 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095411 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095428 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095443 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095458 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095472 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095486 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095500 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095515 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095529 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095546 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095560 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095574 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095589 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095605 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095619 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095634 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095648 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095665 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095679 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095695 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095709 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod 
\"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095723 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095739 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095754 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095768 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095783 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095797 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095827 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095890 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095908 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095922 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095937 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095953 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095989 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096005 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096021 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096035 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096050 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096065 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096080 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096096 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096111 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096126 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096140 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096155 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096170 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096551 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096567 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096582 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096597 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 
12:12:11.096612 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096627 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096642 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096657 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096672 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096694 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096708 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096726 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096742 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096757 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096773 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096787 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096802 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096832 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096848 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096880 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096895 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096911 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096926 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096940 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod 
\"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096955 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096970 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096985 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.097002 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.097017 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.097031 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.097046 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.097061 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.097077 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.097092 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.097107 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.097122 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.105600 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.097138 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115340 4745 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115379 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115405 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115427 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115451 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115473 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115492 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115537 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115563 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115589 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 
12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115611 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115636 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115654 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115675 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115696 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115720 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115746 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115767 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115785 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115823 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115848 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115868 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115888 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115910 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115932 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115949 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115970 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116023 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116054 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116078 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116100 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116130 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116149 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116170 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116191 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116262 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116329 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116351 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116375 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116407 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116431 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116521 4745 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116540 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116558 4745 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116569 4745 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116588 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116602 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116612 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc 
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116622 4745 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116634 4745 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116648 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116659 4745 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116674 4745 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116685 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116846 4745 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095158 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095204 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095563 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095665 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095765 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.095868 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096134 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.096686 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.097028 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.102012 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.102153 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.102433 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.104764 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.105053 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.105250 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.105731 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.107418 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.107662 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.107904 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.109776 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.110505 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.111460 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.112613 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.112868 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.113659 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.114143 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.114793 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115484 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115535 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.115780 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116129 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116218 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116350 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116481 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116710 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.116742 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:12:11.616719305 +0000 UTC m=+24.421629993 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.116763 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.117430 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.117931 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.118217 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.118786 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.119199 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.119431 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.119656 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.119689 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.119777 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.120238 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.120003 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.120592 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.120697 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.120894 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.121387 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.121598 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.121674 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.121729 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.121864 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.122065 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.122744 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.123535 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.123553 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.123880 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.124223 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.124608 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.125219 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.125630 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.125709 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.126467 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.126491 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.126633 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.126922 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.127095 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.127161 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.127297 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.127446 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.131127 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.131171 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.131202 4745 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.131215 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.131231 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.131243 4745 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.131263 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.131279 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.131293 4745 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.131314 4745 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.127458 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). 
InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.127483 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.127763 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.132124 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.132440 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.132722 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.132869 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.133016 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.133125 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.133236 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.133307 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.133334 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.133590 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.133697 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.134066 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.134092 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.134258 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.134382 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.134479 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.134571 4745 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.134746 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 12:12:11.634723568 +0000 UTC m=+24.439634346 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.135045 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.135486 4745 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.135733 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.135752 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 12:12:11.635546001 +0000 UTC m=+24.440456689 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.135880 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.136131 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.136162 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.136284 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.136288 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.136557 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.136781 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.136890 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.136909 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.137205 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.137272 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.136032 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.137992 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.138013 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.138055 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.138059 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.138099 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.138116 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.138125 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:11Z","lastTransitionTime":"2026-01-27T12:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.138665 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.138786 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.138964 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.139034 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.139073 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.139378 4745 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.139411 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.139426 4745 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.139440 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.139450 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.139459 4745 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.139469 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.139478 4745 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.139489 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.139500 4745 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.139517 4745 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc 
kubenswrapper[4745]: I0127 12:12:11.139529 4745 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.139541 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.139553 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.139563 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.139558 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.139846 4745 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.141291 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.141687 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.141785 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.142065 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.153479 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.142141 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.142196 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.143066 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.143315 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.143370 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.144103 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.144989 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.146696 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.146751 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.146892 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.148061 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.148936 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.149144 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.149294 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.150123 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.152070 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.152077 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.152215 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.152592 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.149361 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.153876 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.155055 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.156038 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.156278 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.159800 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.159849 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.159863 4745 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.159927 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 12:12:11.659905513 +0000 UTC m=+24.464816271 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.160339 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.160376 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.160424 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.160714 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.160787 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.160941 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.161282 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.161382 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.161400 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.161486 4745 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.161542 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 12:12:11.661529848 +0000 UTC m=+24.466440606 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.162997 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.163804 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.164463 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.164635 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.160206 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.172201 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.175623 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.175915 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.175964 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.176213 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.177186 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.177998 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.184227 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.184263 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.184273 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.184288 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.184297 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:11Z","lastTransitionTime":"2026-01-27T12:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.184499 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.188965 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.195549 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.195685 4745 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.197221 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.197247 4745 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.197254 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.197268 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.197276 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:11Z","lastTransitionTime":"2026-01-27T12:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240333 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240398 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240454 4745 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240463 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240472 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240480 4745 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240489 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240498 4745 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240506 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on 
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240514 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240522 4745 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240529 4745 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240538 4745 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240545 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240553 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240560 4745 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240569 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240577 4745 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240585 4745 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240595 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240603 4745 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240638 4745 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127
12:12:11.240649 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240658 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240666 4745 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240674 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240682 4745 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240691 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240700 4745 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240709 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240717 4745 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240726 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240734 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240743 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240752 4745 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 
12:12:11.240767 4745 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240775 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240783 4745 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240791 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240799 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240807 4745 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240845 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240853 4745 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240862 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240870 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240879 4745 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240887 4745 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240897 4745 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc 
kubenswrapper[4745]: I0127 12:12:11.240906 4745 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240916 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240925 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240934 4745 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240943 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240952 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240960 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240968 4745 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240976 4745 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240986 4745 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.240995 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241004 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241013 4745 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241021 4745 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241029 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241039 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241047 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241055 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241063 4745 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241071 4745 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241079 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241087 4745 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241099 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241107 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241116 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241124 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241133 4745 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241141 4745 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241150 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241159 4745 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241167 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241178 4745 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241190 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241197 4745 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241206 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241215 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241223 4745 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241232 4745 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241247 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" 
(UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241256 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241265 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241274 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241283 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241306 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241314 4745 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241323 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241332 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241341 4745 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241350 4745 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241358 4745 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241366 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 
12:12:11.241374 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241383 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241391 4745 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241400 4745 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241408 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241417 4745 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241425 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241432 4745 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241440 4745 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241448 4745 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241455 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241463 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241471 4745 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: 
I0127 12:12:11.241479 4745 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241491 4745 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241499 4745 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241507 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241515 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241525 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241534 4745 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241542 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241550 4745 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241560 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241568 4745 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241582 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241590 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241597 4745 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241605 4745 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241612 4745 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241620 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241628 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241636 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241644 4745 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241655 4745 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241663 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241680 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241688 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241697 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241704 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241712 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241720 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241729 4745 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241736 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241745 4745 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241753 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241761 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241770 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241777 4745 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241789 4745 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241798 4745 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241810 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241833 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241841 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: 
\"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241850 4745 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241862 4745 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241870 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241877 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241885 4745 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241931 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.241978 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.299675 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.299742 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.299755 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.299769 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.299779 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:11Z","lastTransitionTime":"2026-01-27T12:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.313330 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
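The long reconciler_common.go:293 "Volume detached" run above is the volume manager clearing desired-state entries for pods that no longer exist after this kubelet restart; the two operation_generator.go:637 lines then show the first host-path mounts succeeding. A hypothetical triage helper, fed this journal on stdin, that tallies the detach entries by volume plugin; the regex targets the escaped \"...\" form shown in these lines and is an illustration, not tooling that ships with kubelet:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches: Volume detached for volume \"NAME\" (UniqueName: \"kubernetes.io/PLUGIN/...
	re := regexp.MustCompile(`Volume detached for volume \\"([^"\\]+)\\" \(UniqueName: \\"kubernetes\.io/([a-z-]+)/`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[2]]++ // m[1] is the volume name, m[2] the plugin (secret, configmap, ...)
		}
	}
	for plugin, n := range counts {
		fmt.Printf("%-12s %d\n", plugin, n)
	}
}
```

Run as, e.g., `go run tally.go < kubelet.journal` to see how much of the teardown was secrets versus configmaps versus projected tokens.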
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.319737 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.326951 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.335684 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"0dac50bb8508d89a36bb51f07c0710d7392d9ac1ddfed3a6de4bc2b27916b50b"}
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.336976 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"df24bfcf5d8c2166aaaf6a758302deaec284372516b08e5cecd4053b9a274bd6"}
Jan 27 12:12:11 crc kubenswrapper[4745]: W0127 12:12:11.340381 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-999fad38ca32bcbc81a8b44ab8feb8bc34600fb703808c1d3e62d92a6be658a8 WatchSource:0}: Error finding container 999fad38ca32bcbc81a8b44ab8feb8bc34600fb703808c1d3e62d92a6be658a8: Status 404 returned error can't find the container with id 999fad38ca32bcbc81a8b44ab8feb8bc34600fb703808c1d3e62d92a6be658a8
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.402223 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.402257 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.402267 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.402281 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.402291 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:11Z","lastTransitionTime":"2026-01-27T12:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
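The manager.go:1169 warning above is a benign cadvisor race: a cgroup watch event arrives for crio-999fad38... after the short-lived container has already been torn down, so the lookup returns 404. An illustrative sketch (assumed helper names, not cadvisor's actual code) of the usual handling, where "not found" is treated as a skip rather than an error:

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the runtime's 404 "can't find the container" reply.
var errNotFound = errors.New("container not found")

// inspect is a hypothetical lookup that can race with container teardown.
func inspect(id string) error {
	return errNotFound // pretend the container exited before we looked
}

func handleWatchEvent(id string) {
	if err := inspect(id); errors.Is(err, errNotFound) {
		fmt.Printf("skipping %s: container already gone (benign race)\n", id)
	} else if err != nil {
		fmt.Printf("real error for %s: %v\n", id, err)
	}
}

func main() {
	handleWatchEvent("999fad38ca32")
}
```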
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.505316 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.505389 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.505402 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.505419 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.505430 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:11Z","lastTransitionTime":"2026-01-27T12:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.608512 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.608552 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.608561 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.608574 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.608585 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:11Z","lastTransitionTime":"2026-01-27T12:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.644909 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.644983 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.645012 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.645090 4745 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.645139 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 12:12:12.645125824 +0000 UTC m=+25.450036512 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.645191 4745 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.645266 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:12:12.645203076 +0000 UTC m=+25.450113764 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.645333 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 12:12:12.6453231 +0000 UTC m=+25.450234048 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.711575 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.711630 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.711648 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.711671 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.711685 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:11Z","lastTransitionTime":"2026-01-27T12:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
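Each nestedpendingoperations.go:348 entry above parks the failed volume operation until a deadline ("No retries permitted until ... durationBeforeRetry 1s") so the reconciler does not hot-spin on an object that is not registered yet. A sketch of that time-gated exponential backoff pattern, with assumed parameters (1s initial delay, 2m cap); this illustrates the pattern, not the kubelet's actual implementation:

```go
package main

import (
	"fmt"
	"time"
)

// expBackoff gates retries: after each failure the next attempt is only
// permitted once lastFailure+delay has passed, and delay doubles up to a cap.
type expBackoff struct {
	lastFailure time.Time
	delay       time.Duration
}

func (b *expBackoff) fail(now time.Time) {
	switch {
	case b.delay == 0:
		b.delay = time.Second // first step, as in "durationBeforeRetry 1s"
	case b.delay < 2*time.Minute:
		b.delay *= 2
	}
	b.lastFailure = now
}

// ready reports whether a retry is permitted at time now.
func (b *expBackoff) ready(now time.Time) bool {
	return now.After(b.lastFailure.Add(b.delay))
}

func main() {
	var b expBackoff
	now := time.Now()
	b.fail(now)
	fmt.Println("retry immediately?", b.ready(now))                            // false
	fmt.Println("retry after 1.1s? ", b.ready(now.Add(1100*time.Millisecond))) // true
}
```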
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.746320 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.746393 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.746530 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.746550 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.746545 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.746589 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.746601 4745 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.746564 4745 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.746657 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 12:12:12.746641694 +0000 UTC m=+25.551552382 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 12:12:11 crc kubenswrapper[4745]: E0127 12:12:11.746717 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 12:12:12.746705256 +0000 UTC m=+25.551615944 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.814549 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.814594 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.814603 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.814617 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.814628 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:11Z","lastTransitionTime":"2026-01-27T12:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.861826 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-97hlh"]
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.862131 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-4x9px"]
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.862407 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-4x9px"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.862720 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-97hlh"
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.863287 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-gc8mv"]
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.863672 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-gfzkp"]
Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.863781 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gc8mv"
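The projected-volume failures above all reduce to the same root cause: the kube-api-access-* token volumes need the namespace's kube-root-ca.crt and openshift-service-ca.crt ConfigMaps, and those objects stay "not registered" until the kubelet's per-namespace informer caches fill (the reflector.go:368 "Caches populated" entries just below show that happening for other namespaces). A hypothetical helper that lists the distinct objects still being waited on; the regex matches the unescaped form used in the Error: lines above:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
)

func main() {
	// Matches: object "NAMESPACE"/"NAME" not registered
	re := regexp.MustCompile(`object "([^"]+)"/"([^"]+)" not registered`)
	seen := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			seen[m[1]+"/"+m[2]] = true
		}
	}
	objects := make([]string, 0, len(seen))
	for k := range seen {
		objects = append(objects, k)
	}
	sort.Strings(objects)
	for _, o := range objects {
		fmt.Println(o)
	}
}
```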
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.863945 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.867352 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.868179 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.868321 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.868848 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.868892 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.868930 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.868968 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.869061 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.869195 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.869448 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.869506 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.870064 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.870607 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.876189 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.876266 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.882101 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.892185 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.899780 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.909891 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.916254 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.916304 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.916319 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.916338 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.916351 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:11Z","lastTransitionTime":"2026-01-27T12:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.920674 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.929654 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.941118 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.948096 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c438e876-f4c1-42ca-b935-b5e58be9cfb2-cni-binary-copy\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.948136 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/49a22b36-6ae4-4887-b364-7d1ac21ff625-proxy-tls\") pod \"machine-config-daemon-gfzkp\" (UID: \"49a22b36-6ae4-4887-b364-7d1ac21ff625\") " pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.948169 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gc8mv\" (UID: \"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\") " pod="openshift-multus/multus-additional-cni-plugins-gc8mv" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.948199 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-host-var-lib-cni-multus\") pod 
\"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.948218 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-multus-socket-dir-parent\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.948237 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed-os-release\") pod \"multus-additional-cni-plugins-gc8mv\" (UID: \"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\") " pod="openshift-multus/multus-additional-cni-plugins-gc8mv" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.948339 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-cnibin\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.948383 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-host-run-netns\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.948413 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-multus-conf-dir\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.948442 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-host-run-multus-certs\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.948472 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf5tf\" (UniqueName: \"kubernetes.io/projected/c438e876-f4c1-42ca-b935-b5e58be9cfb2-kube-api-access-pf5tf\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.948502 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed-cni-binary-copy\") pod \"multus-additional-cni-plugins-gc8mv\" (UID: \"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\") " pod="openshift-multus/multus-additional-cni-plugins-gc8mv" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.948534 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gc8mv\" (UID: \"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\") " pod="openshift-multus/multus-additional-cni-plugins-gc8mv" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.948565 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed-system-cni-dir\") pod \"multus-additional-cni-plugins-gc8mv\" (UID: \"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\") " pod="openshift-multus/multus-additional-cni-plugins-gc8mv" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.948594 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzt5n\" (UniqueName: \"kubernetes.io/projected/c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed-kube-api-access-rzt5n\") pod \"multus-additional-cni-plugins-gc8mv\" (UID: \"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\") " pod="openshift-multus/multus-additional-cni-plugins-gc8mv" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.948646 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-host-var-lib-cni-bin\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.948687 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-etc-kubernetes\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.948710 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrz6x\" (UniqueName: \"kubernetes.io/projected/98fd8161-ba85-49ff-bbae-48dd3925f0e1-kube-api-access-wrz6x\") pod \"node-resolver-4x9px\" (UID: \"98fd8161-ba85-49ff-bbae-48dd3925f0e1\") " pod="openshift-dns/node-resolver-4x9px" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.948740 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c438e876-f4c1-42ca-b935-b5e58be9cfb2-multus-daemon-config\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.948769 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/49a22b36-6ae4-4887-b364-7d1ac21ff625-mcd-auth-proxy-config\") pod \"machine-config-daemon-gfzkp\" (UID: \"49a22b36-6ae4-4887-b364-7d1ac21ff625\") " pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.948792 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed-cnibin\") pod \"multus-additional-cni-plugins-gc8mv\" (UID: \"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\") " pod="openshift-multus/multus-additional-cni-plugins-gc8mv" Jan 
27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.948848 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-system-cni-dir\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.948890 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-os-release\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.948951 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-host-var-lib-kubelet\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.948989 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/49a22b36-6ae4-4887-b364-7d1ac21ff625-rootfs\") pod \"machine-config-daemon-gfzkp\" (UID: \"49a22b36-6ae4-4887-b364-7d1ac21ff625\") " pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.949026 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltjnb\" (UniqueName: \"kubernetes.io/projected/49a22b36-6ae4-4887-b364-7d1ac21ff625-kube-api-access-ltjnb\") pod \"machine-config-daemon-gfzkp\" (UID: \"49a22b36-6ae4-4887-b364-7d1ac21ff625\") " pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.949066 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-host-run-k8s-cni-cncf-io\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.949102 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-hostroot\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.949135 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/98fd8161-ba85-49ff-bbae-48dd3925f0e1-hosts-file\") pod \"node-resolver-4x9px\" (UID: \"98fd8161-ba85-49ff-bbae-48dd3925f0e1\") " pod="openshift-dns/node-resolver-4x9px" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.949181 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-multus-cni-dir\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 
27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.951251 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.978280 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:11 crc kubenswrapper[4745]: I0127 12:12:11.994845 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.008218 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.018336 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.018378 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.018389 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.018404 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.018415 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:12Z","lastTransitionTime":"2026-01-27T12:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.022643 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 17:25:58.026405004 +0000 UTC Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.043797 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050159 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-hostroot\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050192 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/98fd8161-ba85-49ff-bbae-48dd3925f0e1-hosts-file\") pod \"node-resolver-4x9px\" (UID: \"98fd8161-ba85-49ff-bbae-48dd3925f0e1\") " pod="openshift-dns/node-resolver-4x9px" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050209 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-multus-cni-dir\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050223 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c438e876-f4c1-42ca-b935-b5e58be9cfb2-cni-binary-copy\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050240 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/49a22b36-6ae4-4887-b364-7d1ac21ff625-proxy-tls\") pod \"machine-config-daemon-gfzkp\" (UID: \"49a22b36-6ae4-4887-b364-7d1ac21ff625\") " pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050280 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gc8mv\" (UID: \"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\") " pod="openshift-multus/multus-additional-cni-plugins-gc8mv" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050311 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-host-var-lib-cni-multus\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050334 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-multus-socket-dir-parent\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050348 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed-os-release\") pod \"multus-additional-cni-plugins-gc8mv\" (UID: \"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\") " pod="openshift-multus/multus-additional-cni-plugins-gc8mv" Jan 27 12:12:12 
crc kubenswrapper[4745]: I0127 12:12:12.050345 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-hostroot\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050375 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-cnibin\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050389 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-host-run-netns\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050402 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-multus-conf-dir\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050416 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-host-run-multus-certs\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050421 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-host-var-lib-cni-multus\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050430 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf5tf\" (UniqueName: \"kubernetes.io/projected/c438e876-f4c1-42ca-b935-b5e58be9cfb2-kube-api-access-pf5tf\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050468 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed-cni-binary-copy\") pod \"multus-additional-cni-plugins-gc8mv\" (UID: \"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\") " pod="openshift-multus/multus-additional-cni-plugins-gc8mv" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050491 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gc8mv\" (UID: \"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\") " pod="openshift-multus/multus-additional-cni-plugins-gc8mv" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050521 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed-system-cni-dir\") pod \"multus-additional-cni-plugins-gc8mv\" (UID: \"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\") " pod="openshift-multus/multus-additional-cni-plugins-gc8mv" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050541 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzt5n\" (UniqueName: \"kubernetes.io/projected/c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed-kube-api-access-rzt5n\") pod \"multus-additional-cni-plugins-gc8mv\" (UID: \"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\") " pod="openshift-multus/multus-additional-cni-plugins-gc8mv" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050565 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-host-var-lib-cni-bin\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050582 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-etc-kubernetes\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050599 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrz6x\" (UniqueName: \"kubernetes.io/projected/98fd8161-ba85-49ff-bbae-48dd3925f0e1-kube-api-access-wrz6x\") pod \"node-resolver-4x9px\" (UID: \"98fd8161-ba85-49ff-bbae-48dd3925f0e1\") " pod="openshift-dns/node-resolver-4x9px" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050618 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c438e876-f4c1-42ca-b935-b5e58be9cfb2-multus-daemon-config\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050638 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/49a22b36-6ae4-4887-b364-7d1ac21ff625-mcd-auth-proxy-config\") pod \"machine-config-daemon-gfzkp\" (UID: \"49a22b36-6ae4-4887-b364-7d1ac21ff625\") " pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050657 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed-cnibin\") pod \"multus-additional-cni-plugins-gc8mv\" (UID: \"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\") " pod="openshift-multus/multus-additional-cni-plugins-gc8mv" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050678 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-system-cni-dir\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050698 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-multus-socket-dir-parent\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050716 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-os-release\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050738 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-host-var-lib-kubelet\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050750 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed-os-release\") pod \"multus-additional-cni-plugins-gc8mv\" (UID: \"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\") " pod="openshift-multus/multus-additional-cni-plugins-gc8mv" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050757 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/49a22b36-6ae4-4887-b364-7d1ac21ff625-rootfs\") pod \"machine-config-daemon-gfzkp\" (UID: \"49a22b36-6ae4-4887-b364-7d1ac21ff625\") " pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050780 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltjnb\" (UniqueName: \"kubernetes.io/projected/49a22b36-6ae4-4887-b364-7d1ac21ff625-kube-api-access-ltjnb\") pod \"machine-config-daemon-gfzkp\" (UID: \"49a22b36-6ae4-4887-b364-7d1ac21ff625\") " pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050783 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-cnibin\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050840 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-host-run-k8s-cni-cncf-io\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050852 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-host-run-netns\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050908 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed-system-cni-dir\") pod \"multus-additional-cni-plugins-gc8mv\" (UID: 
\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\") " pod="openshift-multus/multus-additional-cni-plugins-gc8mv" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.051034 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-host-var-lib-cni-bin\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.051064 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-etc-kubernetes\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.050412 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-multus-cni-dir\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.051467 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gc8mv\" (UID: \"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\") " pod="openshift-multus/multus-additional-cni-plugins-gc8mv" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.051515 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-host-run-multus-certs\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.051553 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-host-run-k8s-cni-cncf-io\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.051586 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-multus-conf-dir\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.051656 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c438e876-f4c1-42ca-b935-b5e58be9cfb2-multus-daemon-config\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.051754 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed-cni-binary-copy\") pod \"multus-additional-cni-plugins-gc8mv\" (UID: \"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\") " pod="openshift-multus/multus-additional-cni-plugins-gc8mv" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.051802 4745 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-os-release\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.051840 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed-cnibin\") pod \"multus-additional-cni-plugins-gc8mv\" (UID: \"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\") " pod="openshift-multus/multus-additional-cni-plugins-gc8mv" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.051865 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-system-cni-dir\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.051886 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c438e876-f4c1-42ca-b935-b5e58be9cfb2-host-var-lib-kubelet\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.051909 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/49a22b36-6ae4-4887-b364-7d1ac21ff625-rootfs\") pod \"machine-config-daemon-gfzkp\" (UID: \"49a22b36-6ae4-4887-b364-7d1ac21ff625\") " pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.051908 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/49a22b36-6ae4-4887-b364-7d1ac21ff625-mcd-auth-proxy-config\") pod \"machine-config-daemon-gfzkp\" (UID: \"49a22b36-6ae4-4887-b364-7d1ac21ff625\") " pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.052214 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gc8mv\" (UID: \"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\") " pod="openshift-multus/multus-additional-cni-plugins-gc8mv" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.052285 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c438e876-f4c1-42ca-b935-b5e58be9cfb2-cni-binary-copy\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.052285 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/98fd8161-ba85-49ff-bbae-48dd3925f0e1-hosts-file\") pod \"node-resolver-4x9px\" (UID: \"98fd8161-ba85-49ff-bbae-48dd3925f0e1\") " pod="openshift-dns/node-resolver-4x9px" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.054026 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/49a22b36-6ae4-4887-b364-7d1ac21ff625-proxy-tls\") pod \"machine-config-daemon-gfzkp\" (UID: \"49a22b36-6ae4-4887-b364-7d1ac21ff625\") " pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.056786 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.068879 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrz6x\" (UniqueName: \"kubernetes.io/projected/98fd8161-ba85-49ff-bbae-48dd3925f0e1-kube-api-access-wrz6x\") pod \"node-resolver-4x9px\" (UID: \"98fd8161-ba85-49ff-bbae-48dd3925f0e1\") " pod="openshift-dns/node-resolver-4x9px" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.070486 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzt5n\" (UniqueName: \"kubernetes.io/projected/c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed-kube-api-access-rzt5n\") pod \"multus-additional-cni-plugins-gc8mv\" (UID: \"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\") " pod="openshift-multus/multus-additional-cni-plugins-gc8mv" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 
12:12:12.071271 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltjnb\" (UniqueName: \"kubernetes.io/projected/49a22b36-6ae4-4887-b364-7d1ac21ff625-kube-api-access-ltjnb\") pod \"machine-config-daemon-gfzkp\" (UID: \"49a22b36-6ae4-4887-b364-7d1ac21ff625\") " pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.072253 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf5tf\" (UniqueName: \"kubernetes.io/projected/c438e876-f4c1-42ca-b935-b5e58be9cfb2-kube-api-access-pf5tf\") pod \"multus-97hlh\" (UID: \"c438e876-f4c1-42ca-b935-b5e58be9cfb2\") " pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.073066 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.076700 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.077234 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.078654 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.079320 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.080307 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.080795 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.081390 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.082357 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.083020 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.083959 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 27 12:12:12 crc 
kubenswrapper[4745]: I0127 12:12:12.084203 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"
2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.084563 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.087743 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.088239 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.088871 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.089995 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.090539 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.091514 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.091893 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.092468 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.093542 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.094038 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.095003 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.095418 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 27 
12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.096101 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.096392 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.096783 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.097407 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.098770 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 27 12:12:12 
crc kubenswrapper[4745]: I0127 12:12:12.099326 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.100383 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.100973 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.101837 4745 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.101936 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.103509 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.104338 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.104733 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.106348 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.106969 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.107021 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.107797 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.108425 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.109663 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.110211 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.111445 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.112167 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.113513 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.114197 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.115072 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.115609 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.116711 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.117192 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.118094 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.118544 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.119517 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.120120 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.120545 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.120592 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.120604 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.120620 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.120631 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:12Z","lastTransitionTime":"2026-01-27T12:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.120717 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.183230 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-4x9px" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.192920 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-97hlh" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.198436 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" Jan 27 12:12:12 crc kubenswrapper[4745]: W0127 12:12:12.201487 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98fd8161_ba85_49ff_bbae_48dd3925f0e1.slice/crio-894bc7a5bf0febdccf891ce4e4b3c4ad145163caf04c8e8de05cd0e2bf8b61e9 WatchSource:0}: Error finding container 894bc7a5bf0febdccf891ce4e4b3c4ad145163caf04c8e8de05cd0e2bf8b61e9: Status 404 returned error can't find the container with id 894bc7a5bf0febdccf891ce4e4b3c4ad145163caf04c8e8de05cd0e2bf8b61e9 Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.206753 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" Jan 27 12:12:12 crc kubenswrapper[4745]: W0127 12:12:12.212566 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49a22b36_6ae4_4887_b364_7d1ac21ff625.slice/crio-08a9976474662f045046cb8a28e14efd9398ea107a2426c254d49fa9fe5c79f1 WatchSource:0}: Error finding container 08a9976474662f045046cb8a28e14efd9398ea107a2426c254d49fa9fe5c79f1: Status 404 returned error can't find the container with id 08a9976474662f045046cb8a28e14efd9398ea107a2426c254d49fa9fe5c79f1 Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.221933 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.221965 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.221976 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.221994 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.222006 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:12Z","lastTransitionTime":"2026-01-27T12:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.230368 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bnfh4"] Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.234883 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.237644 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.238522 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.238681 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.238917 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.239078 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.239317 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.239541 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.252168 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.262563 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.271109 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.279144 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.289909 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.301509 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.312136 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.321122 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.324038 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.324416 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.324430 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.324474 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.324486 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:12Z","lastTransitionTime":"2026-01-27T12:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.332191 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.341870 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.342756 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.344309 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.345407 4745 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e" exitCode=255 Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.345465 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e"} Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.345500 4745 scope.go:117] "RemoveContainer" containerID="88f6ca9e27818b5f81f8a369818de8426da86cf2a400e157129947b4115fe61f" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.348897 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23"} Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.348929 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be"} Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.350311 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-97hlh" event={"ID":"c438e876-f4c1-42ca-b935-b5e58be9cfb2","Type":"ContainerStarted","Data":"881112eb740747428b8d0087ac553b26e2045c8cc24a94b40407ed8ee53db150"} Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.351266 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4x9px" event={"ID":"98fd8161-ba85-49ff-bbae-48dd3925f0e1","Type":"ContainerStarted","Data":"894bc7a5bf0febdccf891ce4e4b3c4ad145163caf04c8e8de05cd0e2bf8b61e9"} Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.353666 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-systemd-units\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.353699 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-kubelet\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.353722 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-run-netns\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.353745 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/26b1987b-69bb-4768-a874-5a97b3327469-ovn-node-metrics-cert\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.353830 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/26b1987b-69bb-4768-a874-5a97b3327469-ovnkube-config\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.353851 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-run-systemd\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.353893 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-run-ovn\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.353919 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.353963 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-slash\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.354030 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-run-openvswitch\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.354054 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-run-ovn-kubernetes\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.354103 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-cni-bin\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.354124 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8whl8\" (UniqueName: \"kubernetes.io/projected/26b1987b-69bb-4768-a874-5a97b3327469-kube-api-access-8whl8\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.354192 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-var-lib-openvswitch\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.354217 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-etc-openvswitch\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.354237 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-log-socket\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.354258 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/26b1987b-69bb-4768-a874-5a97b3327469-ovnkube-script-lib\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.354283 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-node-log\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.354303 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-cni-netd\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.354325 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/26b1987b-69bb-4768-a874-5a97b3327469-env-overrides\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.354782 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca"} Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.355930 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" event={"ID":"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed","Type":"ContainerStarted","Data":"65cc5ac033a1bd1f87acfb151345d8092bbd478aeca57a4b0fcd95cdad4715ff"} Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.356906 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerStarted","Data":"08a9976474662f045046cb8a28e14efd9398ea107a2426c254d49fa9fe5c79f1"} Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.360393 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"999fad38ca32bcbc81a8b44ab8feb8bc34600fb703808c1d3e62d92a6be658a8"} Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.362480 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.363115 4745 scope.go:117] "RemoveContainer" containerID="4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e" Jan 27 12:12:12 crc kubenswrapper[4745]: E0127 12:12:12.363432 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.367424 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host
-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:12 crc 
kubenswrapper[4745]: I0127 12:12:12.378398 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.392636 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f6ca9e27818b5f81f8a369818de8426da86cf2a400e157129947b4115fe61f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:03Z\\\",\\\"message\\\":\\\"W0127 12:11:51.778121 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 
12:11:51.778576 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769515911 cert, and key in /tmp/serving-cert-3669477915/serving-signer.crt, /tmp/serving-cert-3669477915/serving-signer.key\\\\nI0127 12:11:52.204690 1 observer_polling.go:159] Starting file observer\\\\nW0127 12:11:52.207734 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 12:11:52.207909 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:11:52.211701 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3669477915/tls.crt::/tmp/serving-cert-3669477915/tls.key\\\\\\\"\\\\nF0127 12:12:02.651226 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 
12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.403947 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.413761 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.423400 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.427034 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.427081 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.427093 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.427109 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.427118 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:12Z","lastTransitionTime":"2026-01-27T12:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.454545 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.455477 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-var-lib-openvswitch\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.455519 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-etc-openvswitch\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.455552 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-log-socket\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.455575 4745 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/26b1987b-69bb-4768-a874-5a97b3327469-ovnkube-script-lib\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.455618 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/26b1987b-69bb-4768-a874-5a97b3327469-env-overrides\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.455651 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-node-log\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.455674 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-cni-netd\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.455774 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-systemd-units\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.455798 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-kubelet\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.455837 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-run-netns\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.455866 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/26b1987b-69bb-4768-a874-5a97b3327469-ovnkube-config\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.455981 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-etc-openvswitch\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.456031 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-var-lib-openvswitch\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.456063 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-log-socket\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.456106 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-systemd-units\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.456466 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-kubelet\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.456501 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-node-log\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.456554 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-cni-netd\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.456507 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-run-netns\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.456987 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/26b1987b-69bb-4768-a874-5a97b3327469-ovnkube-script-lib\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.456992 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/26b1987b-69bb-4768-a874-5a97b3327469-env-overrides\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.457276 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/26b1987b-69bb-4768-a874-5a97b3327469-ovnkube-config\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.457906 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/26b1987b-69bb-4768-a874-5a97b3327469-ovn-node-metrics-cert\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.457947 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-run-ovn\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.457967 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.457990 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-run-systemd\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.458018 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-slash\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.458045 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-run-openvswitch\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.458070 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-run-ovn-kubernetes\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.458089 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-cni-bin\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.458106 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8whl8\" (UniqueName: \"kubernetes.io/projected/26b1987b-69bb-4768-a874-5a97b3327469-kube-api-access-8whl8\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.458320 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-run-openvswitch\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.458328 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-slash\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.458339 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.458351 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-run-ovn-kubernetes\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.458367 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-cni-bin\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.458367 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-run-systemd\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.458385 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-run-ovn\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.462319 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/26b1987b-69bb-4768-a874-5a97b3327469-ovn-node-metrics-cert\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.469846 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.481254 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8whl8\" (UniqueName: \"kubernetes.io/projected/26b1987b-69bb-4768-a874-5a97b3327469-kube-api-access-8whl8\") pod \"ovnkube-node-bnfh4\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.490210 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.501173 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.510685 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.519680 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.526636 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.529115 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.529137 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.529145 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.529158 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.529167 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:12Z","lastTransitionTime":"2026-01-27T12:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.564414 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4"
Jan 27 12:12:12 crc kubenswrapper[4745]: W0127 12:12:12.576236 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26b1987b_69bb_4768_a874_5a97b3327469.slice/crio-497b03f46b89194ada5f7b6e50d63a3832cf7e8a6018995b5f5d73648f2dc301 WatchSource:0}: Error finding container 497b03f46b89194ada5f7b6e50d63a3832cf7e8a6018995b5f5d73648f2dc301: Status 404 returned error can't find the container with id 497b03f46b89194ada5f7b6e50d63a3832cf7e8a6018995b5f5d73648f2dc301
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.631616 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.631672 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.631687 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.631710 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.631727 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:12Z","lastTransitionTime":"2026-01-27T12:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.659932 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:12:12 crc kubenswrapper[4745]: E0127 12:12:12.660063 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:12:14.660044283 +0000 UTC m=+27.464954971 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.660467 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.660602 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 12:12:12 crc kubenswrapper[4745]: E0127 12:12:12.660799 4745 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 12:12:12 crc kubenswrapper[4745]: E0127 12:12:12.660904 4745 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 27 12:12:12 crc kubenswrapper[4745]: E0127 12:12:12.661012 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 12:12:14.66098985 +0000 UTC m=+27.465900548 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 27 12:12:12 crc kubenswrapper[4745]: E0127 12:12:12.661144 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 12:12:14.661126404 +0000 UTC m=+27.466037172 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.734711 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.734744 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.734753 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.734767 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.734778 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:12Z","lastTransitionTime":"2026-01-27T12:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.761792 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.761869 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 12:12:12 crc kubenswrapper[4745]: E0127 12:12:12.762009 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 27 12:12:12 crc kubenswrapper[4745]: E0127 12:12:12.762025 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 27 12:12:12 crc kubenswrapper[4745]: E0127 12:12:12.762036 4745 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 12:12:12 crc kubenswrapper[4745]: E0127 12:12:12.762076 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 12:12:14.762064417 +0000 UTC m=+27.566975095 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 12:12:12 crc kubenswrapper[4745]: E0127 12:12:12.762122 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 27 12:12:12 crc kubenswrapper[4745]: E0127 12:12:12.762132 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 27 12:12:12 crc kubenswrapper[4745]: E0127 12:12:12.762138 4745 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 12:12:12 crc kubenswrapper[4745]: E0127 12:12:12.762155 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 12:12:14.762149839 +0000 UTC m=+27.567060527 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.837617 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.837653 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.837664 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.837682 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.837695 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:12Z","lastTransitionTime":"2026-01-27T12:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.939954 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.939991 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.940002 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.940018 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.940030 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:12Z","lastTransitionTime":"2026-01-27T12:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.982686 4745 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 27 12:12:12 crc kubenswrapper[4745]: I0127 12:12:12.993668 4745 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.012162 4745 csr.go:261] certificate signing request csr-h9q8b is approved, waiting to be issued
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.022803 4745 csr.go:257] certificate signing request csr-h9q8b is issued
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.022782 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 02:47:37.222590579 +0000 UTC
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.042784 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.042834 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.042843 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.042861 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.042871 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:13Z","lastTransitionTime":"2026-01-27T12:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.073493 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.073583 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 12:12:13 crc kubenswrapper[4745]: E0127 12:12:13.073632 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.073732 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 12:12:13 crc kubenswrapper[4745]: E0127 12:12:13.073914 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 12:12:13 crc kubenswrapper[4745]: E0127 12:12:13.074034 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.145675 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.145713 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.145724 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.145740 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.145749 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:13Z","lastTransitionTime":"2026-01-27T12:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.248195 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.248223 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.248231 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.248243 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.248253 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:13Z","lastTransitionTime":"2026-01-27T12:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.350059 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.350107 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.350118 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.350133 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.350146 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:13Z","lastTransitionTime":"2026-01-27T12:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.364538 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4x9px" event={"ID":"98fd8161-ba85-49ff-bbae-48dd3925f0e1","Type":"ContainerStarted","Data":"d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5"}
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.365986 4745 generic.go:334] "Generic (PLEG): container finished" podID="26b1987b-69bb-4768-a874-5a97b3327469" containerID="a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb" exitCode=0
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.366018 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" event={"ID":"26b1987b-69bb-4768-a874-5a97b3327469","Type":"ContainerDied","Data":"a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb"}
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.366061 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" event={"ID":"26b1987b-69bb-4768-a874-5a97b3327469","Type":"ContainerStarted","Data":"497b03f46b89194ada5f7b6e50d63a3832cf7e8a6018995b5f5d73648f2dc301"}
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.368804 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.371761 4745 scope.go:117] "RemoveContainer" containerID="4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e"
Jan 27 12:12:13 crc kubenswrapper[4745]: E0127 12:12:13.371942 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.373612 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-97hlh" event={"ID":"c438e876-f4c1-42ca-b935-b5e58be9cfb2","Type":"ContainerStarted","Data":"910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966"}
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.375671 4745 generic.go:334] "Generic (PLEG): container finished" podID="c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed" containerID="8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b" exitCode=0
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.375784 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" event={"ID":"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed","Type":"ContainerDied","Data":"8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b"}
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.379032 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:13Z is after 2025-08-24T17:21:41Z"
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.380922 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerStarted","Data":"66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5"}
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.380959 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerStarted","Data":"d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865"}
Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.399973 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f6ca9e27818b5f81f8a369818de8426da86cf2a400e157129947b4115fe61f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:03Z\\\",\\\"message\\\":\\\"W0127 12:11:51.778121 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 12:11:51.778576 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769515911 cert, and key in /tmp/serving-cert-3669477915/serving-signer.crt, /tmp/serving-cert-3669477915/serving-signer.key\\\\nI0127 12:11:52.204690 1 observer_polling.go:159] Starting file observer\\\\nW0127 12:11:52.207734 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 12:11:52.207909 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:11:52.211701 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3669477915/tls.crt::/tmp/serving-cert-3669477915/tls.key\\\\\\\"\\\\nF0127 12:12:02.651226 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating 
requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:13Z is after 2025-08-24T17:21:41Z" Jan 27 
12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.417789 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:13Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.435692 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:13Z is 
after 2025-08-24T17:21:41Z" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.449136 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:13Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.452272 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.452296 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.452303 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.452317 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.452326 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:13Z","lastTransitionTime":"2026-01-27T12:12:13Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.462293 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:13Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.476837 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:13Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.490057 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:13Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.502459 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:13Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.515758 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:13Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.528496 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:13Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.543113 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:13Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.555119 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.555168 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.555179 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.555196 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.555209 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:13Z","lastTransitionTime":"2026-01-27T12:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.558599 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b1
54edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:13Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.571872 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\
\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:13Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.593504 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:13Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.607727 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:13Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.618240 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:13Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.631982 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:13Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.645278 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:13Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.657775 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.657855 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.657869 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.657886 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.657896 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:13Z","lastTransitionTime":"2026-01-27T12:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.660455 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:13Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.673206 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:13Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.685421 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:13Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.699077 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:13Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:13 crc 
kubenswrapper[4745]: I0127 12:12:13.719522 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"Po
dInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:13Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.760305 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.760354 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.760365 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.760381 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.760398 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:13Z","lastTransitionTime":"2026-01-27T12:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.862883 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.862922 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.862932 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.862952 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.862962 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:13Z","lastTransitionTime":"2026-01-27T12:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.964617 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.964900 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.964911 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.964926 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:13 crc kubenswrapper[4745]: I0127 12:12:13.964935 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:13Z","lastTransitionTime":"2026-01-27T12:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.023790 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 09:00:04.923195537 +0000 UTC Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.023857 4745 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-27 12:07:13 +0000 UTC, rotation deadline is 2026-11-21 19:08:05.47943968 +0000 UTC Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.023914 4745 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7158h55m51.455528237s for next certificate rotation Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.066635 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.066667 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.066676 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.066689 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.066697 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:14Z","lastTransitionTime":"2026-01-27T12:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.169214 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.169296 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.169324 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.169356 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.169384 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:14Z","lastTransitionTime":"2026-01-27T12:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.271908 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.271960 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.271971 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.271987 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.271999 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:14Z","lastTransitionTime":"2026-01-27T12:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.373531 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.373563 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.373573 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.373588 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.373598 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:14Z","lastTransitionTime":"2026-01-27T12:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.385725 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" event={"ID":"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed","Type":"ContainerStarted","Data":"664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127"} Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.386857 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853"} Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.389222 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" event={"ID":"26b1987b-69bb-4768-a874-5a97b3327469","Type":"ContainerStarted","Data":"d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce"} Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.389247 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" event={"ID":"26b1987b-69bb-4768-a874-5a97b3327469","Type":"ContainerStarted","Data":"37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56"} Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.389258 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" event={"ID":"26b1987b-69bb-4768-a874-5a97b3327469","Type":"ContainerStarted","Data":"2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544"} Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.400210 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.418636 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.434465 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.449787 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.541324 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.541360 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.541369 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.541382 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.541390 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:14Z","lastTransitionTime":"2026-01-27T12:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.542678 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.557042 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.570672 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.585064 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.597535 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.609425 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.624645 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.642428 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77
3257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev
/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\
\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.643773 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.643797 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.643806 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.643833 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.643842 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:14Z","lastTransitionTime":"2026-01-27T12:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.656493 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.674880 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.691003 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.708197 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.732282 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.740043 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.740202 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.740236 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:14 crc kubenswrapper[4745]: E0127 12:12:14.740329 4745 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 12:12:14 crc kubenswrapper[4745]: E0127 12:12:14.740376 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 12:12:18.740358724 +0000 UTC m=+31.545269412 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 12:12:14 crc kubenswrapper[4745]: E0127 12:12:14.740433 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:12:18.740425515 +0000 UTC m=+31.545336203 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:12:14 crc kubenswrapper[4745]: E0127 12:12:14.740505 4745 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 12:12:14 crc kubenswrapper[4745]: E0127 12:12:14.740533 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 12:12:18.740525198 +0000 UTC m=+31.545435886 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.746413 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.746456 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.746467 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.746485 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.746497 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:14Z","lastTransitionTime":"2026-01-27T12:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.748625 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.785194 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.789394 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-5d8gm"] Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.789735 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-5d8gm" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.794874 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.794952 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.795284 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.795294 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.802434 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.814966 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.829210 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.841075 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.841164 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:12:14 crc kubenswrapper[4745]: E0127 12:12:14.841241 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 12:12:14 crc kubenswrapper[4745]: E0127 12:12:14.841267 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 12:12:14 crc kubenswrapper[4745]: E0127 12:12:14.841280 4745 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 12:12:14 crc kubenswrapper[4745]: E0127 12:12:14.841313 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 12:12:14 crc kubenswrapper[4745]: E0127 12:12:14.841328 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 12:12:18.841313248 +0000 UTC m=+31.646223946 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 12:12:14 crc kubenswrapper[4745]: E0127 12:12:14.841336 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 12:12:14 crc kubenswrapper[4745]: E0127 12:12:14.841357 4745 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 12:12:14 crc kubenswrapper[4745]: E0127 12:12:14.841411 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 12:12:18.84139227 +0000 UTC m=+31.646303008 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.844803 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.848733 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.848769 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.848780 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.848796 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.848824 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:14Z","lastTransitionTime":"2026-01-27T12:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.864903 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z 
is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.883396 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.898569 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.909403 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.924638 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.935037 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.942401 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/685faeae-b2b7-47a3-8da8-7fe8b2a725a5-serviceca\") pod \"node-ca-5d8gm\" (UID: \"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\") " pod="openshift-image-registry/node-ca-5d8gm" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.942444 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/685faeae-b2b7-47a3-8da8-7fe8b2a725a5-host\") pod \"node-ca-5d8gm\" (UID: \"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\") " pod="openshift-image-registry/node-ca-5d8gm" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.942507 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mgjv\" (UniqueName: \"kubernetes.io/projected/685faeae-b2b7-47a3-8da8-7fe8b2a725a5-kube-api-access-6mgjv\") pod \"node-ca-5d8gm\" (UID: \"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\") " pod="openshift-image-registry/node-ca-5d8gm" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.946502 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.950978 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.951032 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.951042 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.951056 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.951065 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:14Z","lastTransitionTime":"2026-01-27T12:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.963477 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entry
point\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.980988 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z 
is after 2025-08-24T17:21:41Z" Jan 27 12:12:14 crc kubenswrapper[4745]: I0127 12:12:14.993452 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:14Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.009626 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:15Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.023306 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:15Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.024301 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 00:07:58.335888187 +0000 UTC Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.038071 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:15Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.043787 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/685faeae-b2b7-47a3-8da8-7fe8b2a725a5-host\") pod \"node-ca-5d8gm\" (UID: \"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\") " pod="openshift-image-registry/node-ca-5d8gm" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.043888 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mgjv\" (UniqueName: \"kubernetes.io/projected/685faeae-b2b7-47a3-8da8-7fe8b2a725a5-kube-api-access-6mgjv\") pod \"node-ca-5d8gm\" (UID: \"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\") " pod="openshift-image-registry/node-ca-5d8gm" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.043913 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/685faeae-b2b7-47a3-8da8-7fe8b2a725a5-serviceca\") pod \"node-ca-5d8gm\" (UID: \"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\") " pod="openshift-image-registry/node-ca-5d8gm" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.043985 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/685faeae-b2b7-47a3-8da8-7fe8b2a725a5-host\") pod \"node-ca-5d8gm\" (UID: \"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\") " pod="openshift-image-registry/node-ca-5d8gm" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.044863 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/685faeae-b2b7-47a3-8da8-7fe8b2a725a5-serviceca\") pod \"node-ca-5d8gm\" (UID: \"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\") " pod="openshift-image-registry/node-ca-5d8gm" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.050823 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:15Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.053322 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.053359 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.053369 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.053385 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.053395 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:15Z","lastTransitionTime":"2026-01-27T12:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.063182 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mgjv\" (UniqueName: \"kubernetes.io/projected/685faeae-b2b7-47a3-8da8-7fe8b2a725a5-kube-api-access-6mgjv\") pod \"node-ca-5d8gm\" (UID: \"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\") " pod="openshift-image-registry/node-ca-5d8gm" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.073154 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.073186 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.073163 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:12:15 crc kubenswrapper[4745]: E0127 12:12:15.073290 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:12:15 crc kubenswrapper[4745]: E0127 12:12:15.073367 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:12:15 crc kubenswrapper[4745]: E0127 12:12:15.073467 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.102194 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-5d8gm" Jan 27 12:12:15 crc kubenswrapper[4745]: W0127 12:12:15.143543 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod685faeae_b2b7_47a3_8da8_7fe8b2a725a5.slice/crio-bd04b3f7bd878b23be8b3a943d47faace615c34c8c6d7b5c336e4ae14e5b0e93 WatchSource:0}: Error finding container bd04b3f7bd878b23be8b3a943d47faace615c34c8c6d7b5c336e4ae14e5b0e93: Status 404 returned error can't find the container with id bd04b3f7bd878b23be8b3a943d47faace615c34c8c6d7b5c336e4ae14e5b0e93 Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.155381 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.155430 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.155441 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.155459 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.155474 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:15Z","lastTransitionTime":"2026-01-27T12:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.258195 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.258258 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.258279 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.258309 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.258332 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:15Z","lastTransitionTime":"2026-01-27T12:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.361195 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.361449 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.361457 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.361470 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.361479 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:15Z","lastTransitionTime":"2026-01-27T12:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.395010 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-5d8gm" event={"ID":"685faeae-b2b7-47a3-8da8-7fe8b2a725a5","Type":"ContainerStarted","Data":"bd04b3f7bd878b23be8b3a943d47faace615c34c8c6d7b5c336e4ae14e5b0e93"} Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.396796 4745 generic.go:334] "Generic (PLEG): container finished" podID="c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed" containerID="664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127" exitCode=0 Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.396869 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" event={"ID":"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed","Type":"ContainerDied","Data":"664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127"} Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.402980 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" event={"ID":"26b1987b-69bb-4768-a874-5a97b3327469","Type":"ContainerStarted","Data":"d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1"} Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.403054 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" event={"ID":"26b1987b-69bb-4768-a874-5a97b3327469","Type":"ContainerStarted","Data":"3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf"} Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.403074 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" event={"ID":"26b1987b-69bb-4768-a874-5a97b3327469","Type":"ContainerStarted","Data":"52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51"} Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.409062 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:15Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.424173 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:15Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.436525 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:15Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.444860 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:15Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.462239 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:15Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.466159 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.466197 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.466209 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.466227 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.466240 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:15Z","lastTransitionTime":"2026-01-27T12:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.477797 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:15Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.492882 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:15Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.505974 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:15Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.525876 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:15Z 
is after 2025-08-24T17:21:41Z" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.539573 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:15Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.550207 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
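
Every "Failed to update status for pod" entry above shares one root cause: the kubelet's status PATCH is intercepted by the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, whose serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-27. A minimal sketch for confirming this from the node itself; it assumes local access to port 9743 and the third-party cryptography package (>= 42), neither of which is shown in the log:

```python
# Fetch the webhook's serving certificate and print its validity window,
# to confirm the "x509: certificate has expired" error seen above.
import ssl
from cryptography import x509  # assumption: cryptography >= 42 is installed

HOST, PORT = "127.0.0.1", 9743  # endpoint taken from the log entries above

# With no CA bundle given, get_server_certificate skips chain verification,
# which is the point here: we want to inspect an expired certificate.
pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode())

print("subject:   ", cert.subject.rfc4514_string())
print("not before:", cert.not_valid_before_utc)
print("not after: ", cert.not_valid_after_utc)  # expected: 2025-08-24 17:21:41+00:00
```

If the printed notAfter matches 2025-08-24T17:21:41Z, the webhook is serving the expired certificate, and every status patch will keep failing until it is rotated.
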
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:15Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.561006 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:15Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.568733 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.568802 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.568841 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.568875 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.568891 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:15Z","lastTransitionTime":"2026-01-27T12:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.570338 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:15Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.679740 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.679805 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.679861 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.679892 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.679915 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:15Z","lastTransitionTime":"2026-01-27T12:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.783488 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.783538 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.783554 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.783583 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.783599 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:15Z","lastTransitionTime":"2026-01-27T12:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.886495 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.886882 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.886899 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.886922 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.886939 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:15Z","lastTransitionTime":"2026-01-27T12:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.988921 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.988946 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.988954 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.988966 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:15 crc kubenswrapper[4745]: I0127 12:12:15.988976 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:15Z","lastTransitionTime":"2026-01-27T12:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.024758 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 16:17:12.616517703 +0000 UTC Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.090868 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.090928 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.090947 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.090971 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.090989 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:16Z","lastTransitionTime":"2026-01-27T12:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.193370 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.193409 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.193417 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.193433 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.193442 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:16Z","lastTransitionTime":"2026-01-27T12:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.295716 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.295775 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.295792 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.295848 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.295866 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:16Z","lastTransitionTime":"2026-01-27T12:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.400034 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.400097 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.400119 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.400149 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.400170 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:16Z","lastTransitionTime":"2026-01-27T12:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
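
Interleaved with the webhook failures, the kubelet keeps flipping the node to NotReady because no CNI configuration file exists in /etc/kubernetes/cni/net.d/ — consistent with the ovnkube-node pod above still sitting in PodInitializing, since that pod is what would write the CNI config. The certificate_manager line at 12:12:16 points the same way: the kubelet-serving certificate's rotation deadline (2025-11-07) is already months past according to the node clock. How far past the webhook certificate itself is can be read straight off the two timestamps in the x509 error; a small sketch, using only values quoted verbatim in the log:

```python
# How long the webhook certificate has been expired, per the node's own clock.
# Both timestamps are quoted verbatim in the x509 error messages above.
from datetime import datetime, timezone

now = datetime(2026, 1, 27, 12, 12, 15, tzinfo=timezone.utc)        # "current time" in the error
not_after = datetime(2025, 8, 24, 17, 21, 41, tzinfo=timezone.utc)  # certificate notAfter

delta = now - not_after
print(f"expired {delta.days} days ago")  # -> expired 155 days ago
```
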
Has your network provider started?"} Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.409679 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-5d8gm" event={"ID":"685faeae-b2b7-47a3-8da8-7fe8b2a725a5","Type":"ContainerStarted","Data":"be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303"} Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.413676 4745 generic.go:334] "Generic (PLEG): container finished" podID="c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed" containerID="685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23" exitCode=0 Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.413741 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" event={"ID":"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed","Type":"ContainerDied","Data":"685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23"} Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.435297 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:16Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.454589 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:16Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.474773 4745 status_manager.go:875] "Failed to update status for pod" 
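
The same status_manager failure repeats for every pod the kubelet manages, so the individual entries matter less than the set of pods affected. A quick way to get that summary from a saved copy of this journal output; the file name is hypothetical, and the regex matches the entry format shown above:

```python
# Count "Failed to update status for pod" entries per pod from a saved log.
import re
from collections import Counter

PATTERN = re.compile(r'"Failed to update status for pod" pod="([^"]+)"')

counts = Counter()
with open("kubelet-journal.log") as f:  # hypothetical path to a saved copy
    for line in f:
        counts.update(PATTERN.findall(line))

for pod, n in counts.most_common():
    print(f"{n:4d}  {pod}")
```

Every pod listed will show the identical webhook error, which is the signature of a single expired certificate rather than of per-pod problems.
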
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:16Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.504728 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.504757 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.504765 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.504778 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.504789 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:16Z","lastTransitionTime":"2026-01-27T12:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.504876 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\
\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:16Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.540633 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:16Z 
is after 2025-08-24T17:21:41Z" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.560335 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:16Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.578057 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:16Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.590350 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:16Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.604801 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:16Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.607298 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.607379 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.607405 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.607438 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.607461 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:16Z","lastTransitionTime":"2026-01-27T12:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.621181 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:16Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.634138 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:16Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.648472 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:16Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.664341 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:16Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.676588 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:16Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.693419 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:16Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.709776 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.709912 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.709934 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.709964 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.709987 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:16Z","lastTransitionTime":"2026-01-27T12:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.711792 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:16Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.723206 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:16Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.739067 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:16Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.750964 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:16Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.771878 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:16Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.787837 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27
T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:16Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.809970 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:16Z 
is after 2025-08-24T17:21:41Z" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.811962 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.812016 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.812029 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.812047 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.812061 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:16Z","lastTransitionTime":"2026-01-27T12:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.826065 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:16Z is after 2025-08-24T17:21:41Z" Jan 27 
12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.844799 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:16Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.862777 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:16Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.877536 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:16Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.914976 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.915050 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.915098 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.915124 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:16 crc kubenswrapper[4745]: I0127 12:12:16.915140 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:16Z","lastTransitionTime":"2026-01-27T12:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.017771 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.017836 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.017848 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.017865 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.017875 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:17Z","lastTransitionTime":"2026-01-27T12:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.025318 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 11:47:31.313323568 +0000 UTC Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.073386 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:12:17 crc kubenswrapper[4745]: E0127 12:12:17.073509 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.073823 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:17 crc kubenswrapper[4745]: E0127 12:12:17.073870 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.073905 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:12:17 crc kubenswrapper[4745]: E0127 12:12:17.073946 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.119925 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.119988 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.120009 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.120036 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.120057 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:17Z","lastTransitionTime":"2026-01-27T12:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.222462 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.222519 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.222537 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.222559 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.222575 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:17Z","lastTransitionTime":"2026-01-27T12:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.324977 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.325010 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.325022 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.325037 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.325048 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:17Z","lastTransitionTime":"2026-01-27T12:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.331717 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.339958 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.342933 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.350615 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.362886 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.374707 4745 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.387742 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",
\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.408434 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:17Z 
is after 2025-08-24T17:21:41Z" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.420950 4745 generic.go:334] "Generic (PLEG): container finished" podID="c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed" containerID="62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0" exitCode=0 Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.421021 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" event={"ID":"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed","Type":"ContainerDied","Data":"62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0"} Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.426210 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" event={"ID":"26b1987b-69bb-4768-a874-5a97b3327469","Type":"ContainerStarted","Data":"93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01"} Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.428160 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.428194 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.428208 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.428225 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.428241 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:17Z","lastTransitionTime":"2026-01-27T12:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.434259 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.450638 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.464300 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.480242 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.491900 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.507051 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.520280 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.532119 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.532192 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.532215 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.532246 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.532270 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:17Z","lastTransitionTime":"2026-01-27T12:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.533319 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.547099 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.559440 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.570340 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.588879 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.617353 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:17Z 
is after 2025-08-24T17:21:41Z" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.632096 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.634836 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.634866 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.634880 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.634896 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.634908 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:17Z","lastTransitionTime":"2026-01-27T12:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.648341 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.662027 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.675450 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.692890 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.704595 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.720063 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.735407 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.737695 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.737748 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.737760 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.737780 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.737792 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:17Z","lastTransitionTime":"2026-01-27T12:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.748779 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.820521 4745 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.842438 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.842496 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.842514 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.842540 4745 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.842558 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:17Z","lastTransitionTime":"2026-01-27T12:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.945225 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.945288 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.945304 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.945331 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:17 crc kubenswrapper[4745]: I0127 12:12:17.945343 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:17Z","lastTransitionTime":"2026-01-27T12:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.026178 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 03:15:05.258745374 +0000 UTC Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.047949 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.048010 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.048031 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.048056 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.048076 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:18Z","lastTransitionTime":"2026-01-27T12:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.100981 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.123501 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.143988 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.150654 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.150712 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.150731 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.150757 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.150774 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:18Z","lastTransitionTime":"2026-01-27T12:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.170284 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.195175 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:18Z 
is after 2025-08-24T17:21:41Z" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.213312 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.228780 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.249616 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.254616 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.254952 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.255035 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.255115 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.255192 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:18Z","lastTransitionTime":"2026-01-27T12:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.265839 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.300027 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.322199 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.337499 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.353317 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.357063 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.357111 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.357128 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.357149 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.357164 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:18Z","lastTransitionTime":"2026-01-27T12:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.365870 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.460106 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.460438 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.460777 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.461011 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.461151 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:18Z","lastTransitionTime":"2026-01-27T12:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.563870 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.564153 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.564257 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.564364 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.564486 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:18Z","lastTransitionTime":"2026-01-27T12:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.666284 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.666314 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.666323 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.666335 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.666343 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:18Z","lastTransitionTime":"2026-01-27T12:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.769056 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.769262 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.769281 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.769305 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.769322 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:18Z","lastTransitionTime":"2026-01-27T12:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.778692 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.778911 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.778975 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:18 crc kubenswrapper[4745]: E0127 12:12:18.779122 4745 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 12:12:18 crc kubenswrapper[4745]: E0127 12:12:18.779125 4745 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 12:12:18 crc kubenswrapper[4745]: E0127 12:12:18.779219 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:12:26.779121581 +0000 UTC m=+39.584032269 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:12:18 crc kubenswrapper[4745]: E0127 12:12:18.779368 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 12:12:26.779357508 +0000 UTC m=+39.584268196 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 12:12:18 crc kubenswrapper[4745]: E0127 12:12:18.779451 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 12:12:26.77944387 +0000 UTC m=+39.584354558 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.872988 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.873334 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.873449 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.873568 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.873683 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:18Z","lastTransitionTime":"2026-01-27T12:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.880399 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.880491 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:12:18 crc kubenswrapper[4745]: E0127 12:12:18.880692 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 12:12:18 crc kubenswrapper[4745]: E0127 12:12:18.880724 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 12:12:18 crc kubenswrapper[4745]: E0127 12:12:18.880741 4745 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 12:12:18 crc kubenswrapper[4745]: E0127 12:12:18.880795 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 12:12:26.880777185 +0000 UTC m=+39.685687893 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 12:12:18 crc kubenswrapper[4745]: E0127 12:12:18.880695 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 12:12:18 crc kubenswrapper[4745]: E0127 12:12:18.880887 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 12:12:18 crc kubenswrapper[4745]: E0127 12:12:18.880909 4745 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 12:12:18 crc kubenswrapper[4745]: E0127 12:12:18.880980 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 12:12:26.88096113 +0000 UTC m=+39.685871858 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.977389 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.977731 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.977865 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.978020 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:18 crc kubenswrapper[4745]: I0127 12:12:18.978162 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:18Z","lastTransitionTime":"2026-01-27T12:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.026862 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 07:17:39.649301583 +0000 UTC Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.073886 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.073904 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:12:19 crc kubenswrapper[4745]: E0127 12:12:19.074317 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.073938 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:12:19 crc kubenswrapper[4745]: E0127 12:12:19.074665 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:12:19 crc kubenswrapper[4745]: E0127 12:12:19.074924 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.081885 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.081967 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.081997 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.082029 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.082058 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:19Z","lastTransitionTime":"2026-01-27T12:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.187288 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.189262 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.189363 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.189465 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.189546 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:19Z","lastTransitionTime":"2026-01-27T12:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.292332 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.292377 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.292393 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.292416 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.292434 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:19Z","lastTransitionTime":"2026-01-27T12:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.395198 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.395277 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.395293 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.395364 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.395573 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:19Z","lastTransitionTime":"2026-01-27T12:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.435219 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" event={"ID":"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed","Type":"ContainerStarted","Data":"89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43"} Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.470390 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:19Z 
is after 2025-08-24T17:21:41Z" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.493016 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.498382 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.498437 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.498452 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.498471 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.498486 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:19Z","lastTransitionTime":"2026-01-27T12:12:19Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.508669 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.522967 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.539673 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.558572 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.570786 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:19Z is after 
2025-08-24T17:21:41Z" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.589247 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"pod
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.603515 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.618693 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92e
daf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.623577 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.623599 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.623607 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.623619 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.623628 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:19Z","lastTransitionTime":"2026-01-27T12:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.631968 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.651228 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.665706 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.676600 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.726225 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.726267 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.726283 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.726298 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.726309 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:19Z","lastTransitionTime":"2026-01-27T12:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.828279 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.828340 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.828357 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.828381 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.828397 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:19Z","lastTransitionTime":"2026-01-27T12:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.931592 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.931635 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.931647 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.931664 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:19 crc kubenswrapper[4745]: I0127 12:12:19.931677 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:19Z","lastTransitionTime":"2026-01-27T12:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.008797 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.009548 4745 scope.go:117] "RemoveContainer" containerID="4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e" Jan 27 12:12:20 crc kubenswrapper[4745]: E0127 12:12:20.009730 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.027532 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 19:01:42.936592685 +0000 UTC Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.034860 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.034900 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.034916 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.034936 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.034951 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:20Z","lastTransitionTime":"2026-01-27T12:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.137327 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.137372 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.137384 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.137402 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.137415 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:20Z","lastTransitionTime":"2026-01-27T12:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.239692 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.239742 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.239755 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.239774 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.239787 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:20Z","lastTransitionTime":"2026-01-27T12:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.342438 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.342466 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.342473 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.342486 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.342494 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:20Z","lastTransitionTime":"2026-01-27T12:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.439541 4745 generic.go:334] "Generic (PLEG): container finished" podID="c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed" containerID="89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43" exitCode=0 Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.439621 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" event={"ID":"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed","Type":"ContainerDied","Data":"89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43"} Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.447358 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" event={"ID":"26b1987b-69bb-4768-a874-5a97b3327469","Type":"ContainerStarted","Data":"cee78ef909058e482d1afdea7d9a6f5d5ac76036cff7b3cfbc979ec021b40e57"} Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.448174 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.448499 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.448679 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.448709 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.448717 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.448730 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.448742 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:20Z","lastTransitionTime":"2026-01-27T12:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.460956 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:20Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.482354 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.482354 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:20Z is after 2025-08-24T17:21:41Z"
Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.495533 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4"
Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.495638 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4"
Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.499158 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:20Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.526781 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:20Z 
is after 2025-08-24T17:21:41Z" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.539720 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:20Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.551296 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.551337 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.551346 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.551359 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.551371 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:20Z","lastTransitionTime":"2026-01-27T12:12:20Z","reason":"KubeletNotReady","message":"container runtime 
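
The patch bodies in these records are hard to read because klog quotes the whole patch as a string, so every quote arrives in the journal as \\\". A short Go sketch that peels one layer of that quoting and pretty-prints the JSON is below; the payload is a hypothetical trimmed fragment (the uid is the networking-console-plugin pod's uid from the record above), not a full patch from the log:

    // patch_pretty.go — illustrative sketch only. The payload below is a
    // hypothetical trimmed fragment (uid taken from the record above),
    // not a complete patch copied from the journal.
    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "strconv"
    )

    func main() {
        payload := `"{\"metadata\":{\"uid\":\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\"},\"status\":{\"podIP\":null}}"`
        unquoted, err := strconv.Unquote(payload) // peel one layer of klog quoting
        if err != nil {
            fmt.Println("unquote failed:", err)
            return
        }
        var pretty bytes.Buffer
        if err := json.Indent(&pretty, []byte(unquoted), "", "  "); err != nil {
            fmt.Println("not valid JSON:", err)
            return
        }
        fmt.Println(pretty.String())
    }

strconv.Unquote removes exactly one level of quoting, so a payload copied straight out of an err="..." field may need the step applied twice, depending on how much of the escaping the journal has already consumed.
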
Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.551371 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:20Z","lastTransitionTime":"2026-01-27T12:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.553896 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:20Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.568285 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:20Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.583045 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:20Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.596137 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:20Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.612160 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:20Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.625404 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:20Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.636563 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:20Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.648702 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:20Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.653974 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.654004 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.654012 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.654043 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.654054 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:20Z","lastTransitionTime":"2026-01-27T12:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.661829 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:20Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.674666 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:20Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.684440 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:20Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.699888 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:20Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.716141 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:20Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.731771 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cee78ef909058e482d1afdea7d9a6f5d5ac76036
cff7b3cfbc979ec021b40e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:20Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.742891 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:20Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.754744 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:20Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.756319 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.756348 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.756356 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.756370 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.756380 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:20Z","lastTransitionTime":"2026-01-27T12:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.767874 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:20Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.780150 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:20Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.797222 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:20Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.806292 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:20Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.818753 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:20Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.834035 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:20Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.849727 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered 
and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:20Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.858624 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.858676 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.858697 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:20 crc kubenswrapper[4745]: 
I0127 12:12:20.858719 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.858737 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:20Z","lastTransitionTime":"2026-01-27T12:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.962076 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.962155 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.962170 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.962193 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:20 crc kubenswrapper[4745]: I0127 12:12:20.962232 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:20Z","lastTransitionTime":"2026-01-27T12:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.027800 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 09:25:20.91562252 +0000 UTC Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.065255 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.065565 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.065576 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.065593 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.065605 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:21Z","lastTransitionTime":"2026-01-27T12:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.074080 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.074077 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.074088 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:12:21 crc kubenswrapper[4745]: E0127 12:12:21.074248 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:12:21 crc kubenswrapper[4745]: E0127 12:12:21.074472 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:12:21 crc kubenswrapper[4745]: E0127 12:12:21.074622 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.168914 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.168978 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.168996 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.169023 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.169042 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:21Z","lastTransitionTime":"2026-01-27T12:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.271767 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.271800 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.271821 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.271835 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.271848 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:21Z","lastTransitionTime":"2026-01-27T12:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.367448 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.367515 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.367539 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.367569 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.367591 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:21Z","lastTransitionTime":"2026-01-27T12:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:21 crc kubenswrapper[4745]: E0127 12:12:21.388472 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:21Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.392186 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.392260 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.392276 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.392298 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.392313 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:21Z","lastTransitionTime":"2026-01-27T12:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:21 crc kubenswrapper[4745]: E0127 12:12:21.411627 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:21Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.415925 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.415955 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.415967 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.415982 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.415993 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:21Z","lastTransitionTime":"2026-01-27T12:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:21 crc kubenswrapper[4745]: E0127 12:12:21.435392 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:21Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.439335 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.439397 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.439417 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.439446 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.439466 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:21Z","lastTransitionTime":"2026-01-27T12:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.452971 4745 generic.go:334] "Generic (PLEG): container finished" podID="c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed" containerID="f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e" exitCode=0 Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.453099 4745 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.454013 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" event={"ID":"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed","Type":"ContainerDied","Data":"f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e"} Jan 27 12:12:21 crc kubenswrapper[4745]: E0127 12:12:21.457692 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ (image list identical to the previous status-patch attempt above; duplicate payload elided) ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:21Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.465117 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.465191 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.465213 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.465240 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.465271 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:21Z","lastTransitionTime":"2026-01-27T12:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.473419 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:21Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:21 crc kubenswrapper[4745]: E0127 12:12:21.482854 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-27T12:12:21Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:21 crc kubenswrapper[4745]: E0127 12:12:21.483234 4745 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.485409 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.485442 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.485457 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.485478 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.485492 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:21Z","lastTransitionTime":"2026-01-27T12:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.499561 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cee78ef909058e482d1afdea7d9a6f5d5ac76036
cff7b3cfbc979ec021b40e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:21Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.518108 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:21Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.535036 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:21Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.553585 4745 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:21Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.568022 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:21Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.581397 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:21Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.597700 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.597741 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.597749 4745 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.597724 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.
126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:21Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.597762 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.597771 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:21Z","lastTransitionTime":"2026-01-27T12:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.611665 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:21Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.624678 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:21Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.634875 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:21Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.647551 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:21Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.659469 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:21Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.671015 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:21Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.699915 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.699945 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.699972 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.699987 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.699996 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:21Z","lastTransitionTime":"2026-01-27T12:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.805112 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.805141 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.805151 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.805164 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.805173 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:21Z","lastTransitionTime":"2026-01-27T12:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.907753 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.907784 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.907794 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.907831 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:21 crc kubenswrapper[4745]: I0127 12:12:21.907850 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:21Z","lastTransitionTime":"2026-01-27T12:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.010442 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.010716 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.010727 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.010742 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.010755 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:22Z","lastTransitionTime":"2026-01-27T12:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.028948 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 22:39:43.271682917 +0000 UTC Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.112551 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.112585 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.112594 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.112609 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.112620 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:22Z","lastTransitionTime":"2026-01-27T12:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.215986 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.216049 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.216074 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.216105 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.216130 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:22Z","lastTransitionTime":"2026-01-27T12:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.318805 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.318861 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.318871 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.318886 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.318910 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:22Z","lastTransitionTime":"2026-01-27T12:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.421281 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.421314 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.421324 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.421338 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.421349 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:22Z","lastTransitionTime":"2026-01-27T12:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.457974 4745 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.458959 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" event={"ID":"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed","Type":"ContainerStarted","Data":"eaf34407c3ea76cda570485ee81a6188c5b1c441dd736272afcb42b0fe1e506a"} Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.475295 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c
9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.489567 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.500106 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.516735 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.523505 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.523529 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.523537 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.523549 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.523557 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:22Z","lastTransitionTime":"2026-01-27T12:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.538201 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaf34407c3ea76cda570485ee81a6188c5b1c441dd736272afcb42b0fe1e506a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.560716 4745 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cee78ef909058e482d1afdea7d9a6f5d5ac76036cff7b3cfbc979ec021b40e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.575895 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.588801 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.600272 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.611051 4745 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.621560 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.625424 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.625450 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.625458 4745 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.625472 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.625481 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:22Z","lastTransitionTime":"2026-01-27T12:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.634653 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.644875 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.660447 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.727182 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.727227 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.727242 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.727262 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.727276 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:22Z","lastTransitionTime":"2026-01-27T12:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.829848 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.829896 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.829907 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.829927 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.829940 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:22Z","lastTransitionTime":"2026-01-27T12:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.931849 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.931888 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.931919 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.931936 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:22 crc kubenswrapper[4745]: I0127 12:12:22.931948 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:22Z","lastTransitionTime":"2026-01-27T12:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.030331 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 21:46:30.110632612 +0000 UTC Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.034350 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.034411 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.034422 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.034436 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.034446 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:23Z","lastTransitionTime":"2026-01-27T12:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.073051 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.073179 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:23 crc kubenswrapper[4745]: E0127 12:12:23.073368 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.073467 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:12:23 crc kubenswrapper[4745]: E0127 12:12:23.073567 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:12:23 crc kubenswrapper[4745]: E0127 12:12:23.073657 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.137158 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.137206 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.137218 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.137235 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.137249 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:23Z","lastTransitionTime":"2026-01-27T12:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.239794 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.239909 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.239930 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.239953 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.239969 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:23Z","lastTransitionTime":"2026-01-27T12:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.342595 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.342672 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.342688 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.342726 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.342737 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:23Z","lastTransitionTime":"2026-01-27T12:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.445435 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.445478 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.445491 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.445514 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.445530 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:23Z","lastTransitionTime":"2026-01-27T12:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.548253 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.548316 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.548336 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.548361 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.548379 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:23Z","lastTransitionTime":"2026-01-27T12:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.650986 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.651052 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.651075 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.651102 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.651120 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:23Z","lastTransitionTime":"2026-01-27T12:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.753260 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.753353 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.753369 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.753391 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.753406 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:23Z","lastTransitionTime":"2026-01-27T12:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.855620 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.855663 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.855676 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.855692 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.855702 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:23Z","lastTransitionTime":"2026-01-27T12:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.958168 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.958201 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.958210 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.958224 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:23 crc kubenswrapper[4745]: I0127 12:12:23.958233 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:23Z","lastTransitionTime":"2026-01-27T12:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.031195 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 22:31:08.424249477 +0000 UTC Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.060181 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.060206 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.060214 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.060227 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.060236 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:24Z","lastTransitionTime":"2026-01-27T12:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.163004 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.163049 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.163066 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.163091 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.163108 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:24Z","lastTransitionTime":"2026-01-27T12:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.265961 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.266125 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.266206 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.266301 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.266401 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:24Z","lastTransitionTime":"2026-01-27T12:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.369982 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.370053 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.370069 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.370094 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.370125 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:24Z","lastTransitionTime":"2026-01-27T12:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.472835 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.472903 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.472922 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.472941 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.472953 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:24Z","lastTransitionTime":"2026-01-27T12:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.575538 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.575844 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.575920 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.575982 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.576050 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:24Z","lastTransitionTime":"2026-01-27T12:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.679105 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.679413 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.679430 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.679448 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.679460 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:24Z","lastTransitionTime":"2026-01-27T12:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.781786 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.781870 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.781888 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.781907 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.781921 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:24Z","lastTransitionTime":"2026-01-27T12:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.885295 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.885350 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.885371 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.885394 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.885411 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:24Z","lastTransitionTime":"2026-01-27T12:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.924400 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k"] Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.924893 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.927293 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.928044 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.940257 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed462537-34be-41e5-a6cb-f8e385dbcf99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z572k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:24Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.949575 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t252\" (UniqueName: \"kubernetes.io/projected/ed462537-34be-41e5-a6cb-f8e385dbcf99-kube-api-access-2t252\") pod \"ovnkube-control-plane-749d76644c-z572k\" (UID: \"ed462537-34be-41e5-a6cb-f8e385dbcf99\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.949667 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ed462537-34be-41e5-a6cb-f8e385dbcf99-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-z572k\" (UID: \"ed462537-34be-41e5-a6cb-f8e385dbcf99\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.949719 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ed462537-34be-41e5-a6cb-f8e385dbcf99-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-z572k\" (UID: 
\"ed462537-34be-41e5-a6cb-f8e385dbcf99\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.949749 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ed462537-34be-41e5-a6cb-f8e385dbcf99-env-overrides\") pod \"ovnkube-control-plane-749d76644c-z572k\" (UID: \"ed462537-34be-41e5-a6cb-f8e385dbcf99\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.953909 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:24Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.966217 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:24Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.979790 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:24Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.988756 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.988842 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.988857 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.988880 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.988895 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:24Z","lastTransitionTime":"2026-01-27T12:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:24 crc kubenswrapper[4745]: I0127 12:12:24.994009 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:24Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.009702 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.026365 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.031527 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 16:49:43.591313003 +0000 UTC Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.047288 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.050514 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ed462537-34be-41e5-a6cb-f8e385dbcf99-env-overrides\") pod \"ovnkube-control-plane-749d76644c-z572k\" (UID: \"ed462537-34be-41e5-a6cb-f8e385dbcf99\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 
12:12:25.050590 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t252\" (UniqueName: \"kubernetes.io/projected/ed462537-34be-41e5-a6cb-f8e385dbcf99-kube-api-access-2t252\") pod \"ovnkube-control-plane-749d76644c-z572k\" (UID: \"ed462537-34be-41e5-a6cb-f8e385dbcf99\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.050638 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ed462537-34be-41e5-a6cb-f8e385dbcf99-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-z572k\" (UID: \"ed462537-34be-41e5-a6cb-f8e385dbcf99\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.050695 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ed462537-34be-41e5-a6cb-f8e385dbcf99-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-z572k\" (UID: \"ed462537-34be-41e5-a6cb-f8e385dbcf99\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.051527 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ed462537-34be-41e5-a6cb-f8e385dbcf99-env-overrides\") pod \"ovnkube-control-plane-749d76644c-z572k\" (UID: \"ed462537-34be-41e5-a6cb-f8e385dbcf99\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.052026 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ed462537-34be-41e5-a6cb-f8e385dbcf99-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-z572k\" (UID: \"ed462537-34be-41e5-a6cb-f8e385dbcf99\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.061694 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.061872 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ed462537-34be-41e5-a6cb-f8e385dbcf99-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-z572k\" (UID: \"ed462537-34be-41e5-a6cb-f8e385dbcf99\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.070788 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t252\" (UniqueName: \"kubernetes.io/projected/ed462537-34be-41e5-a6cb-f8e385dbcf99-kube-api-access-2t252\") pod \"ovnkube-control-plane-749d76644c-z572k\" (UID: \"ed462537-34be-41e5-a6cb-f8e385dbcf99\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.073482 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:12:25 crc kubenswrapper[4745]: E0127 12:12:25.073633 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.073717 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.073971 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.074124 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:12:25 crc kubenswrapper[4745]: E0127 12:12:25.074247 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:12:25 crc kubenswrapper[4745]: E0127 12:12:25.074406 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.091326 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaf34407c3ea76cda570485ee81a6188c5b1c441dd736272afcb42b0fe1e506a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.091874 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.091909 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.091920 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.091938 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.091950 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:25Z","lastTransitionTime":"2026-01-27T12:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.112706 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cee78ef909058e482d1afdea7d9a6f5d5ac76036cff7b3cfbc979ec021b40e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.124782 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.138558 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.150164 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.194627 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.194671 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.194683 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.194701 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.194712 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:25Z","lastTransitionTime":"2026-01-27T12:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.244284 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.297416 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.297459 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.297468 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.297482 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.297492 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:25Z","lastTransitionTime":"2026-01-27T12:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.400213 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.400267 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.400277 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.400295 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.400307 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:25Z","lastTransitionTime":"2026-01-27T12:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.469208 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" event={"ID":"ed462537-34be-41e5-a6cb-f8e385dbcf99","Type":"ContainerStarted","Data":"558d7fcee54958925753351aba9e2dc413b01a020fc0bc02dc892f984aebe1c6"} Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.470756 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bnfh4_26b1987b-69bb-4768-a874-5a97b3327469/ovnkube-controller/0.log" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.473411 4745 generic.go:334] "Generic (PLEG): container finished" podID="26b1987b-69bb-4768-a874-5a97b3327469" containerID="cee78ef909058e482d1afdea7d9a6f5d5ac76036cff7b3cfbc979ec021b40e57" exitCode=1 Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.473438 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" event={"ID":"26b1987b-69bb-4768-a874-5a97b3327469","Type":"ContainerDied","Data":"cee78ef909058e482d1afdea7d9a6f5d5ac76036cff7b3cfbc979ec021b40e57"} Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.474143 4745 scope.go:117] "RemoveContainer" containerID="cee78ef909058e482d1afdea7d9a6f5d5ac76036cff7b3cfbc979ec021b40e57" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.488013 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.498751 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.504073 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.504109 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.504122 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.504139 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.504152 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:25Z","lastTransitionTime":"2026-01-27T12:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.512231 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.526231 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaf34407c3ea76cda570485ee81a6188c5b1c441dd736272afcb42b0fe1e506a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.545982 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cee78ef909058e482d1afdea7d9a6f5d5ac76036
cff7b3cfbc979ec021b40e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee78ef909058e482d1afdea7d9a6f5d5ac76036cff7b3cfbc979ec021b40e57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"opping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 12:12:24.909519 5999 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 12:12:24.909548 5999 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 12:12:24.909581 5999 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 12:12:24.909614 5999 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 12:12:24.909641 5999 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 12:12:24.909651 5999 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 12:12:24.909686 5999 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 12:12:24.909697 5999 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 12:12:24.909705 5999 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 12:12:24.909726 5999 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 12:12:24.909732 5999 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 12:12:24.909753 5999 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 12:12:24.909884 5999 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 12:12:24.909922 5999 factory.go:656] Stopping watch factory\\\\nI0127 12:12:24.909937 5999 ovnkube.go:599] Stopped ovnkube\\\\nI0127 
12:12:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.560460 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.575666 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 
12:12:25.588267 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2
026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.597976 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.607101 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.607146 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.607157 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.607172 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.607184 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:25Z","lastTransitionTime":"2026-01-27T12:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.607508 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed462537-34be-41e5-a6cb-f8e385dbcf99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z572k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.618070 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.628975 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.629360 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-swntl"] Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.629992 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:12:25 crc kubenswrapper[4745]: E0127 12:12:25.630135 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.641362 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.653636 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.657147 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7kh6\" (UniqueName: \"kubernetes.io/projected/c1811fa8-9015-4fe0-8fad-2461d64cdffd-kube-api-access-v7kh6\") pod \"network-metrics-daemon-swntl\" (UID: \"c1811fa8-9015-4fe0-8fad-2461d64cdffd\") " pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.657211 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1811fa8-9015-4fe0-8fad-2461d64cdffd-metrics-certs\") pod \"network-metrics-daemon-swntl\" (UID: \"c1811fa8-9015-4fe0-8fad-2461d64cdffd\") " pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.665126 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.680772 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.694939 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.708907 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.708945 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.708956 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.708971 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.708982 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:25Z","lastTransitionTime":"2026-01-27T12:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.710101 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.722034 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.731387 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.743168 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.753599 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.758568 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c1811fa8-9015-4fe0-8fad-2461d64cdffd-metrics-certs\") pod \"network-metrics-daemon-swntl\" (UID: \"c1811fa8-9015-4fe0-8fad-2461d64cdffd\") " pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.758645 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7kh6\" (UniqueName: \"kubernetes.io/projected/c1811fa8-9015-4fe0-8fad-2461d64cdffd-kube-api-access-v7kh6\") pod \"network-metrics-daemon-swntl\" (UID: \"c1811fa8-9015-4fe0-8fad-2461d64cdffd\") " pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:12:25 crc kubenswrapper[4745]: E0127 12:12:25.758935 4745 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 12:12:25 crc kubenswrapper[4745]: E0127 12:12:25.758979 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1811fa8-9015-4fe0-8fad-2461d64cdffd-metrics-certs podName:c1811fa8-9015-4fe0-8fad-2461d64cdffd nodeName:}" failed. No retries permitted until 2026-01-27 12:12:26.258965678 +0000 UTC m=+39.063876376 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1811fa8-9015-4fe0-8fad-2461d64cdffd-metrics-certs") pod "network-metrics-daemon-swntl" (UID: "c1811fa8-9015-4fe0-8fad-2461d64cdffd") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.768004 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.773897 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7kh6\" (UniqueName: \"kubernetes.io/projected/c1811fa8-9015-4fe0-8fad-2461d64cdffd-kube-api-access-v7kh6\") pod \"network-metrics-daemon-swntl\" (UID: \"c1811fa8-9015-4fe0-8fad-2461d64cdffd\") " pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.781848 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaf34407c3ea76cda570485ee81a6188c5b1c441dd736272afcb42b0fe1e506a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\
\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"cont
ainerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.803751 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cee78ef909058e482d1afdea7d9a6f5d5ac76036
cff7b3cfbc979ec021b40e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee78ef909058e482d1afdea7d9a6f5d5ac76036cff7b3cfbc979ec021b40e57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"opping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 12:12:24.909519 5999 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 12:12:24.909548 5999 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 12:12:24.909581 5999 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 12:12:24.909614 5999 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 12:12:24.909641 5999 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 12:12:24.909651 5999 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 12:12:24.909686 5999 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 12:12:24.909697 5999 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 12:12:24.909705 5999 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 12:12:24.909726 5999 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 12:12:24.909732 5999 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 12:12:24.909753 5999 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 12:12:24.909884 5999 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 12:12:24.909922 5999 factory.go:656] Stopping watch factory\\\\nI0127 12:12:24.909937 5999 ovnkube.go:599] Stopped ovnkube\\\\nI0127 
12:12:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.811246 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.811273 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.811283 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.811297 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.811306 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:25Z","lastTransitionTime":"2026-01-27T12:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.816756 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.828539 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.841349 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.855892 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.867523 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed462537-34be-41e5-a6cb-f8e385dbcf99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:2
4Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z572k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.879276 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-swntl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1811fa8-9015-4fe0-8fad-2461d64cdffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-swntl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.914299 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.914370 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.914393 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.914425 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:25 crc kubenswrapper[4745]: I0127 12:12:25.914449 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:25Z","lastTransitionTime":"2026-01-27T12:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.017305 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.017341 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.017353 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.017369 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.017382 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:26Z","lastTransitionTime":"2026-01-27T12:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.032214 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 04:58:46.254038339 +0000 UTC Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.120025 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.120057 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.120065 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.120078 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.120086 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:26Z","lastTransitionTime":"2026-01-27T12:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.223105 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.223157 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.223168 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.223189 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.223201 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:26Z","lastTransitionTime":"2026-01-27T12:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.263099 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1811fa8-9015-4fe0-8fad-2461d64cdffd-metrics-certs\") pod \"network-metrics-daemon-swntl\" (UID: \"c1811fa8-9015-4fe0-8fad-2461d64cdffd\") " pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:12:26 crc kubenswrapper[4745]: E0127 12:12:26.263320 4745 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 12:12:26 crc kubenswrapper[4745]: E0127 12:12:26.263431 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1811fa8-9015-4fe0-8fad-2461d64cdffd-metrics-certs podName:c1811fa8-9015-4fe0-8fad-2461d64cdffd nodeName:}" failed. 
No retries permitted until 2026-01-27 12:12:27.263407998 +0000 UTC m=+40.068318696 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1811fa8-9015-4fe0-8fad-2461d64cdffd-metrics-certs") pod "network-metrics-daemon-swntl" (UID: "c1811fa8-9015-4fe0-8fad-2461d64cdffd") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.325798 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.325890 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.325907 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.325931 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.325950 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:26Z","lastTransitionTime":"2026-01-27T12:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.428557 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.428598 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.428607 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.428622 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.428631 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:26Z","lastTransitionTime":"2026-01-27T12:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.478380 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bnfh4_26b1987b-69bb-4768-a874-5a97b3327469/ovnkube-controller/0.log" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.480978 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" event={"ID":"26b1987b-69bb-4768-a874-5a97b3327469","Type":"ContainerStarted","Data":"f3d28930a95bae9ff365f24415c1be01ef84747eaa260d29208cf8d0e1ac5c27"} Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.481276 4745 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.484585 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" event={"ID":"ed462537-34be-41e5-a6cb-f8e385dbcf99","Type":"ContainerStarted","Data":"5ca350bc8a2f3ce6b6271662b78adfcbf42bdaea57a45d78132ab39d08c50562"} Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.484628 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" event={"ID":"ed462537-34be-41e5-a6cb-f8e385dbcf99","Type":"ContainerStarted","Data":"1cd28c1e66e65534a92e21787b94ac2fc6ba53cfd7e70a50a54bea18ddfc3797"} Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.495965 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.508584 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.520005 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.531432 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.531482 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.531497 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.531513 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.531524 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:26Z","lastTransitionTime":"2026-01-27T12:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.532231 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.545686 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.561171 4745 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.580268 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaf34407c3ea76cda570485ee81a6188c5b1c441dd736272afcb42b0fe1e506a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.601182 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d28930a95bae9ff365f24415c1be01ef84747eaa260d29208cf8d0e1ac5c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee78ef909058e482d1afdea7d9a6f5d5ac76036cff7b3cfbc979ec021b40e57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"opping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 12:12:24.909519 5999 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 12:12:24.909548 5999 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 12:12:24.909581 5999 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 12:12:24.909614 5999 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 12:12:24.909641 5999 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 12:12:24.909651 5999 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 12:12:24.909686 5999 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 12:12:24.909697 5999 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 12:12:24.909705 5999 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 12:12:24.909726 5999 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 12:12:24.909732 5999 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 12:12:24.909753 5999 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 12:12:24.909884 5999 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 12:12:24.909922 5999 factory.go:656] Stopping watch factory\\\\nI0127 12:12:24.909937 5999 ovnkube.go:599] Stopped ovnkube\\\\nI0127 
12:12:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.613742 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" 
for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.627746 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.635199 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.635243 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.635257 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.635274 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.635285 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:26Z","lastTransitionTime":"2026-01-27T12:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.640725 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-
socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.650336 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.664011 4745 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed462537-34be-41e5-a6cb-f8e385dbcf99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z572k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.674516 4745 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-swntl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1811fa8-9015-4fe0-8fad-2461d64cdffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-swntl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.728296 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.737417 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.739106 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.739155 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.739171 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.739190 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.739203 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:26Z","lastTransitionTime":"2026-01-27T12:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.746908 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed462537-34be-41e5-a6cb-f8e385dbcf99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd28c1e66e65534a92e21787b94ac2fc6ba53cfd7e70a50a54bea18ddfc3797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca350bc8a2f3ce6b6271662b78adfcbf42bdaea57a45d78132ab39d08c50562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z572k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 
12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.756328 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-swntl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1811fa8-9015-4fe0-8fad-2461d64cdffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-swntl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.772021 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.783938 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.797671 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.811322 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.827718 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.837977 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.841747 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.841774 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.841782 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.841796 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.841828 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:26Z","lastTransitionTime":"2026-01-27T12:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.853774 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.867306 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.870058 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.870215 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:26 crc kubenswrapper[4745]: E0127 12:12:26.870262 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:12:42.870233212 +0000 UTC m=+55.675143900 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:12:26 crc kubenswrapper[4745]: E0127 12:12:26.870301 4745 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 12:12:26 crc kubenswrapper[4745]: E0127 12:12:26.870364 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 12:12:42.870347375 +0000 UTC m=+55.675258063 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.870423 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:26 crc kubenswrapper[4745]: E0127 12:12:26.870557 4745 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 12:12:26 crc kubenswrapper[4745]: E0127 12:12:26.870618 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 12:12:42.870606572 +0000 UTC m=+55.675517330 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.879243 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.894247 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaf34407c3ea76cda570485ee81a6188c5b1c441dd736272afcb42b0fe1e506a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.916367 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d28930a95bae9ff365f24415c1be01ef84747eaa260d29208cf8d0e1ac5c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee78ef909058e482d1afdea7d9a6f5d5ac76036cff7b3cfbc979ec021b40e57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"opping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 12:12:24.909519 5999 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 12:12:24.909548 5999 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 12:12:24.909581 5999 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 12:12:24.909614 5999 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 12:12:24.909641 5999 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 12:12:24.909651 5999 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 12:12:24.909686 5999 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 12:12:24.909697 5999 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 12:12:24.909705 5999 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 12:12:24.909726 5999 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 12:12:24.909732 5999 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 12:12:24.909753 5999 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 12:12:24.909884 5999 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 12:12:24.909922 5999 factory.go:656] Stopping watch factory\\\\nI0127 12:12:24.909937 5999 ovnkube.go:599] Stopped ovnkube\\\\nI0127 
12:12:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.931102 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.942213 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.947045 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:26 
crc kubenswrapper[4745]: I0127 12:12:26.947108 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.947128 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.947212 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.947260 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:26Z","lastTransitionTime":"2026-01-27T12:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.959848 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.971302 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:12:26 crc kubenswrapper[4745]: I0127 12:12:26.971343 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:12:26 crc kubenswrapper[4745]: E0127 12:12:26.971475 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 12:12:26 crc kubenswrapper[4745]: E0127 12:12:26.971491 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 12:12:26 crc kubenswrapper[4745]: E0127 12:12:26.971501 4745 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 12:12:26 crc kubenswrapper[4745]: E0127 12:12:26.971539 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 12:12:42.971526675 +0000 UTC m=+55.776437353 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 12:12:26 crc kubenswrapper[4745]: E0127 12:12:26.971476 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 12:12:26 crc kubenswrapper[4745]: E0127 12:12:26.971574 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 12:12:26 crc kubenswrapper[4745]: E0127 12:12:26.971587 4745 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 12:12:26 crc kubenswrapper[4745]: E0127 12:12:26.971619 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 12:12:42.971607718 +0000 UTC m=+55.776518406 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.033374 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 20:02:21.765723096 +0000 UTC Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.049991 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.050042 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.050057 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.050075 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.050087 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:27Z","lastTransitionTime":"2026-01-27T12:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.073070 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.073128 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.073128 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.073118 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:12:27 crc kubenswrapper[4745]: E0127 12:12:27.073211 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:12:27 crc kubenswrapper[4745]: E0127 12:12:27.073329 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:12:27 crc kubenswrapper[4745]: E0127 12:12:27.073626 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:12:27 crc kubenswrapper[4745]: E0127 12:12:27.073414 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.152615 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.152677 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.152693 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.153075 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.153127 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:27Z","lastTransitionTime":"2026-01-27T12:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.256082 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.256185 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.256207 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.256238 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.256257 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:27Z","lastTransitionTime":"2026-01-27T12:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.275775 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1811fa8-9015-4fe0-8fad-2461d64cdffd-metrics-certs\") pod \"network-metrics-daemon-swntl\" (UID: \"c1811fa8-9015-4fe0-8fad-2461d64cdffd\") " pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:12:27 crc kubenswrapper[4745]: E0127 12:12:27.276036 4745 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 12:12:27 crc kubenswrapper[4745]: E0127 12:12:27.276149 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1811fa8-9015-4fe0-8fad-2461d64cdffd-metrics-certs podName:c1811fa8-9015-4fe0-8fad-2461d64cdffd nodeName:}" failed. No retries permitted until 2026-01-27 12:12:29.276129276 +0000 UTC m=+42.081039974 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1811fa8-9015-4fe0-8fad-2461d64cdffd-metrics-certs") pod "network-metrics-daemon-swntl" (UID: "c1811fa8-9015-4fe0-8fad-2461d64cdffd") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.358840 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.358892 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.358911 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.358975 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.358994 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:27Z","lastTransitionTime":"2026-01-27T12:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.462227 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.462301 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.462326 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.462358 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.462382 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:27Z","lastTransitionTime":"2026-01-27T12:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.490138 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bnfh4_26b1987b-69bb-4768-a874-5a97b3327469/ovnkube-controller/1.log" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.490640 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bnfh4_26b1987b-69bb-4768-a874-5a97b3327469/ovnkube-controller/0.log" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.494775 4745 generic.go:334] "Generic (PLEG): container finished" podID="26b1987b-69bb-4768-a874-5a97b3327469" containerID="f3d28930a95bae9ff365f24415c1be01ef84747eaa260d29208cf8d0e1ac5c27" exitCode=1 Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.494879 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" event={"ID":"26b1987b-69bb-4768-a874-5a97b3327469","Type":"ContainerDied","Data":"f3d28930a95bae9ff365f24415c1be01ef84747eaa260d29208cf8d0e1ac5c27"} Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.494953 4745 scope.go:117] "RemoveContainer" containerID="cee78ef909058e482d1afdea7d9a6f5d5ac76036cff7b3cfbc979ec021b40e57" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.496074 4745 scope.go:117] "RemoveContainer" containerID="f3d28930a95bae9ff365f24415c1be01ef84747eaa260d29208cf8d0e1ac5c27" Jan 27 12:12:27 crc kubenswrapper[4745]: E0127 12:12:27.496280 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bnfh4_openshift-ovn-kubernetes(26b1987b-69bb-4768-a874-5a97b3327469)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" podUID="26b1987b-69bb-4768-a874-5a97b3327469" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.509162 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.524646 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.546200 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaf34407c3ea76cda570485ee81a6188c5b1c441dd736272afcb42b0fe1e506a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.564797 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.564859 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.564874 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.564895 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.564909 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:27Z","lastTransitionTime":"2026-01-27T12:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.577259 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d28930a95bae9ff365f24415c1be01ef84747eaa260d29208cf8d0e1ac5c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee78ef909058e482d1afdea7d9a6f5d5ac76036cff7b3cfbc979ec021b40e57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"opping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 12:12:24.909519 5999 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 12:12:24.909548 5999 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 12:12:24.909581 5999 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 12:12:24.909614 5999 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 12:12:24.909641 5999 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 12:12:24.909651 5999 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 12:12:24.909686 5999 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 12:12:24.909697 5999 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 12:12:24.909705 5999 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 12:12:24.909726 5999 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 12:12:24.909732 5999 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 12:12:24.909753 5999 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 12:12:24.909884 5999 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 12:12:24.909922 5999 factory.go:656] Stopping watch factory\\\\nI0127 12:12:24.909937 5999 ovnkube.go:599] Stopped ovnkube\\\\nI0127 
12:12:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d28930a95bae9ff365f24415c1be01ef84747eaa260d29208cf8d0e1ac5c27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"message\\\":\\\"b0b1d51-5ec6-479b-8881-93dfa8d30337}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 12:12:26.726327 6202 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0127 12:12:26.726306 6202 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}\\\\nI0127 12:12:26.726246 6202 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 12:12:26.726405 6202 services_controller.go:360] Finished syncing service marketplace-operator-metrics on namespace openshift-marketplace for network=default : 55.74855ms\\\\nF0127 12:12:26.726478 6202 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call 
w\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.591028 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.607308 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.619156 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.632332 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed462537-34be-41e5-a6cb-f8e385dbcf99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd28c1e66e65534a92e21787b94ac2fc6ba53cfd7e70a50a54bea18ddfc3797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca350bc8a2f3ce6b6271662b78adfcbf42bdaea57a45d78132ab39d08c50562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z572k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 
12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.645480 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-swntl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1811fa8-9015-4fe0-8fad-2461d64cdffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-swntl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.663713 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.668210 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.668262 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.668275 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.668292 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.668303 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:27Z","lastTransitionTime":"2026-01-27T12:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.679886 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.691094 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.701322 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.710720 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.722416 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.735100 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.770772 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.770841 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.770854 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.770870 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.770881 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:27Z","lastTransitionTime":"2026-01-27T12:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.873646 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.873698 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.873710 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.873732 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.873745 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:27Z","lastTransitionTime":"2026-01-27T12:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.976745 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.976834 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.976856 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.976876 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:27 crc kubenswrapper[4745]: I0127 12:12:27.976891 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:27Z","lastTransitionTime":"2026-01-27T12:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.034242 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:35:20.238014068 +0000 UTC Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.079126 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.079206 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.079233 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.079267 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.079293 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:28Z","lastTransitionTime":"2026-01-27T12:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.094203 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.109938 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.124433 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.147767 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.169998 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.182593 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.182649 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.182666 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.182690 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.182706 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:28Z","lastTransitionTime":"2026-01-27T12:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.187553 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.210292 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.230574 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaf34407c3ea76cda570485ee81a6188c5b1c441dd736272afcb42b0fe1e506a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.251654 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d28930a95bae9ff365f24415c1be01ef84747e
aa260d29208cf8d0e1ac5c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee78ef909058e482d1afdea7d9a6f5d5ac76036cff7b3cfbc979ec021b40e57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"opping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 12:12:24.909519 5999 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 12:12:24.909548 5999 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 12:12:24.909581 5999 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 12:12:24.909614 5999 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 12:12:24.909641 5999 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 12:12:24.909651 5999 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 12:12:24.909686 5999 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 12:12:24.909697 5999 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 12:12:24.909705 5999 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 12:12:24.909726 5999 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 12:12:24.909732 5999 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 12:12:24.909753 5999 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 12:12:24.909884 5999 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 12:12:24.909922 5999 factory.go:656] Stopping watch factory\\\\nI0127 12:12:24.909937 5999 ovnkube.go:599] Stopped ovnkube\\\\nI0127 12:12:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d28930a95bae9ff365f24415c1be01ef84747eaa260d29208cf8d0e1ac5c27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"message\\\":\\\"b0b1d51-5ec6-479b-8881-93dfa8d30337}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 12:12:26.726327 6202 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0127 12:12:26.726306 6202 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}\\\\nI0127 12:12:26.726246 6202 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 12:12:26.726405 6202 services_controller.go:360] Finished syncing service marketplace-operator-metrics on namespace openshift-marketplace for network=default : 55.74855ms\\\\nF0127 12:12:26.726478 6202 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared 
informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call w\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.265558 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.281789 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.285871 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.285942 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.285954 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.285976 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.285990 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:28Z","lastTransitionTime":"2026-01-27T12:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.295455 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.311708 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed462537-34be-41e5-a6cb-f8e385dbcf99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd28c1e66e65534a92e21787b94ac2fc6ba53cfd7e70a50a54bea18ddfc3797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca350bc8a2f3ce6b6271662b78adfcbf42bdaea57a45d78132ab39d08c50562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:
24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z572k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.329907 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-swntl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1811fa8-9015-4fe0-8fad-2461d64cdffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-swntl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.343699 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.357960 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.389020 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.389086 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.389104 4745 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.389131 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.389149 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:28Z","lastTransitionTime":"2026-01-27T12:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.491595 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.491663 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.491685 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.491709 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.491725 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:28Z","lastTransitionTime":"2026-01-27T12:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.510715 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bnfh4_26b1987b-69bb-4768-a874-5a97b3327469/ovnkube-controller/1.log" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.601258 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.601358 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.601397 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.601435 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.601461 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:28Z","lastTransitionTime":"2026-01-27T12:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.706495 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.706597 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.706623 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.706656 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.706679 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:28Z","lastTransitionTime":"2026-01-27T12:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.809199 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.809245 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.809260 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.809280 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.809295 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:28Z","lastTransitionTime":"2026-01-27T12:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.911394 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.911434 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.911444 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.911458 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:28 crc kubenswrapper[4745]: I0127 12:12:28.911468 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:28Z","lastTransitionTime":"2026-01-27T12:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.013355 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.013397 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.013410 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.013426 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.013438 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:29Z","lastTransitionTime":"2026-01-27T12:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.035460 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 04:32:22.613144838 +0000 UTC Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.073362 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.073441 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.073473 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:12:29 crc kubenswrapper[4745]: E0127 12:12:29.073622 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:12:29 crc kubenswrapper[4745]: E0127 12:12:29.073733 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.073734 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:12:29 crc kubenswrapper[4745]: E0127 12:12:29.073798 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:12:29 crc kubenswrapper[4745]: E0127 12:12:29.074021 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.116093 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.116141 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.116155 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.116172 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.116181 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:29Z","lastTransitionTime":"2026-01-27T12:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.219882 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.219952 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.219976 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.219998 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.220013 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:29Z","lastTransitionTime":"2026-01-27T12:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
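The same five node-status entries (three pressure conditions, NodeNotReady, then setters.go) recur roughly every 100 ms, which is consistent with the kubelet retrying a failing status update in a tight loop; the underlying patch failure only becomes visible at 12:12:31 below. To read the resulting Ready condition from outside the node, a short client-go sketch (the kubeconfig path is a placeholder):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            // Expect Ready=False with reason KubeletNotReady while this log repeats.
            fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason)
        }
    }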
Has your network provider started?"} Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.298354 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1811fa8-9015-4fe0-8fad-2461d64cdffd-metrics-certs\") pod \"network-metrics-daemon-swntl\" (UID: \"c1811fa8-9015-4fe0-8fad-2461d64cdffd\") " pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:12:29 crc kubenswrapper[4745]: E0127 12:12:29.298535 4745 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 12:12:29 crc kubenswrapper[4745]: E0127 12:12:29.298645 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1811fa8-9015-4fe0-8fad-2461d64cdffd-metrics-certs podName:c1811fa8-9015-4fe0-8fad-2461d64cdffd nodeName:}" failed. No retries permitted until 2026-01-27 12:12:33.298617758 +0000 UTC m=+46.103528496 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1811fa8-9015-4fe0-8fad-2461d64cdffd-metrics-certs") pod "network-metrics-daemon-swntl" (UID: "c1811fa8-9015-4fe0-8fad-2461d64cdffd") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.323233 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.323270 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.323282 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.323298 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.323309 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:29Z","lastTransitionTime":"2026-01-27T12:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
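Two things are packed into the metrics-certs failure: "object \"openshift-multus\"/\"metrics-daemon-secret\" not registered" means the kubelet's secret manager has not yet synced that secret (normal this early in startup), and the mount is then parked with exponential backoff. The quoted durationBeforeRetry of 4s sits on a doubling curve (0.5s, 1s, 2s, 4s, ...); a sketch with constants approximating the upstream kubelet defaults (treat the exact numbers as assumptions):

    package main

    import (
        "fmt"
        "time"
    )

    // Doubling backoff as applied to a failing MountVolume operation.
    // initial/max values approximate the kubelet defaults; illustrative only.
    const (
        initialBackoff = 500 * time.Millisecond
        maxBackoff     = 2*time.Minute + 2*time.Second
    )

    func nextBackoff(cur time.Duration) time.Duration {
        if cur == 0 {
            return initialBackoff
        }
        if next := cur * 2; next < maxBackoff {
            return next
        }
        return maxBackoff
    }

    func main() {
        var d time.Duration
        for i := 0; i < 5; i++ {
            d = nextBackoff(d)
            fmt.Println(d) // 500ms 1s 2s 4s 8s; the log above is at the 4s step
        }
    }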
Has your network provider started?"} Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.426157 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.426233 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.426251 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.426271 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.426285 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:29Z","lastTransitionTime":"2026-01-27T12:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.529286 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.529327 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.529341 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.529366 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.529380 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:29Z","lastTransitionTime":"2026-01-27T12:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.632927 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.633025 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.633040 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.633065 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.633080 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:29Z","lastTransitionTime":"2026-01-27T12:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.736192 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.736251 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.736268 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.736289 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.736304 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:29Z","lastTransitionTime":"2026-01-27T12:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.838179 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.838223 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.838232 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.838250 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.838259 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:29Z","lastTransitionTime":"2026-01-27T12:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.940920 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.940974 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.940986 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.941005 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:29 crc kubenswrapper[4745]: I0127 12:12:29.941019 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:29Z","lastTransitionTime":"2026-01-27T12:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.036153 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 16:25:55.423196289 +0000 UTC Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.043795 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.043883 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.043907 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.043934 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.043951 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:30Z","lastTransitionTime":"2026-01-27T12:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.146469 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.146538 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.146561 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.146599 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.146623 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:30Z","lastTransitionTime":"2026-01-27T12:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
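Note the kubelet-serving certificate lines: the expiration is fixed at 2026-02-24 05:53:03, but the rotation deadline is recomputed on every pass (2025-12-10, then 2026-01-02, then 2026-01-07) and always lands before the node clock of 2026-01-27, so the certificate manager keeps deciding to rotate immediately. client-go picks the deadline as a jittered point late in the certificate's validity window; a sketch of that computation, where the 0.7 + 0.2*rand factor and the issue date are assumptions (the log only shows the expiry):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // Jittered rotation deadline: a random point in roughly the last 10-30%
    // of the certificate lifetime, recomputed on every manager loop.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
        notBefore := notAfter.AddDate(-1, 0, 0) // issue date assumed, not in the log
        fmt.Println(rotationDeadline(notBefore, notAfter))
    }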
Has your network provider started?"} Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.249615 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.249667 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.249679 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.249696 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.249716 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:30Z","lastTransitionTime":"2026-01-27T12:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.351636 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.351681 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.351692 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.351714 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.351728 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:30Z","lastTransitionTime":"2026-01-27T12:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.453932 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.453983 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.453994 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.454013 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.454038 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:30Z","lastTransitionTime":"2026-01-27T12:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.556074 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.556112 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.556121 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.556138 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.556150 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:30Z","lastTransitionTime":"2026-01-27T12:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.658128 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.658175 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.658188 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.658206 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.658217 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:30Z","lastTransitionTime":"2026-01-27T12:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.760598 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.760643 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.760655 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.760673 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.760684 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:30Z","lastTransitionTime":"2026-01-27T12:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.863294 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.863336 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.863350 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.863367 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.863378 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:30Z","lastTransitionTime":"2026-01-27T12:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.965955 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.965992 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.966002 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.966016 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:30 crc kubenswrapper[4745]: I0127 12:12:30.966025 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:30Z","lastTransitionTime":"2026-01-27T12:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.037146 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 12:49:01.513043706 +0000 UTC Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.068465 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.068508 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.068518 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.068532 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.068541 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:31Z","lastTransitionTime":"2026-01-27T12:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.073695 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.073732 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.073715 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.073708 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:12:31 crc kubenswrapper[4745]: E0127 12:12:31.073822 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:12:31 crc kubenswrapper[4745]: E0127 12:12:31.073920 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:12:31 crc kubenswrapper[4745]: E0127 12:12:31.074086 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:12:31 crc kubenswrapper[4745]: E0127 12:12:31.074155 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.171647 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.171703 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.171717 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.171736 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.171751 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:31Z","lastTransitionTime":"2026-01-27T12:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.274678 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.274709 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.274717 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.274730 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.274740 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:31Z","lastTransitionTime":"2026-01-27T12:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
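Only these four pods cycle through "Error syncing pod, skipping": they are the ones that need the cluster network, and the kubelet gates sandbox creation on runtime network readiness while exempting host-network pods. A simplified sketch of that gate (the real check lives in the kubelet's pod workers; this version is illustrative):

    package main

    import (
        "errors"
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    // canSyncPod refuses non-host-network pods while the runtime network is
    // not ready, which is what produces the pod_workers errors above.
    func canSyncPod(pod *v1.Pod, networkReady bool) error {
        if !networkReady && !pod.Spec.HostNetwork {
            return errors.New("network is not ready: container runtime network not ready")
        }
        return nil
    }

    func main() {
        pod := &v1.Pod{}
        pod.Spec.HostNetwork = false
        fmt.Println(canSyncPod(pod, false))
    }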
Has your network provider started?"} Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.376928 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.376960 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.376976 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.376991 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.377000 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:31Z","lastTransitionTime":"2026-01-27T12:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.479734 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.479776 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.479790 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.479810 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.480035 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:31Z","lastTransitionTime":"2026-01-27T12:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.582914 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.583859 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.584012 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.584107 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.584186 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:31Z","lastTransitionTime":"2026-01-27T12:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.687297 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.687522 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.687643 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.687750 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.687853 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:31Z","lastTransitionTime":"2026-01-27T12:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.790977 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.791031 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.791043 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.791060 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.791072 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:31Z","lastTransitionTime":"2026-01-27T12:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.800657 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.800705 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.800714 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.800732 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.800745 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:31Z","lastTransitionTime":"2026-01-27T12:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:31 crc kubenswrapper[4745]: E0127 12:12:31.813978 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:31Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.818609 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.818801 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.818908 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.818997 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.819075 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:31Z","lastTransitionTime":"2026-01-27T12:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:31 crc kubenswrapper[4745]: E0127 12:12:31.839377 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:31Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.844729 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.844793 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.844809 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.844848 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.844862 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:31Z","lastTransitionTime":"2026-01-27T12:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:31 crc kubenswrapper[4745]: E0127 12:12:31.858943 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:31Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.862523 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.862796 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.862980 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.863180 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.863356 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:31Z","lastTransitionTime":"2026-01-27T12:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:31 crc kubenswrapper[4745]: E0127 12:12:31.876593 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:31Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.880237 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.880381 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.880471 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.880570 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.880671 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:31Z","lastTransitionTime":"2026-01-27T12:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:31 crc kubenswrapper[4745]: E0127 12:12:31.895235 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:31Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:31 crc kubenswrapper[4745]: E0127 12:12:31.895351 4745 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.896708 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.896735 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.896744 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.896757 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.896767 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:31Z","lastTransitionTime":"2026-01-27T12:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.999506 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.999868 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:31 crc kubenswrapper[4745]: I0127 12:12:31.999962 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.000054 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.000159 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:32Z","lastTransitionTime":"2026-01-27T12:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.038058 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 20:47:19.290443344 +0000 UTC Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.102416 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.102448 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.102456 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.102470 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.102479 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:32Z","lastTransitionTime":"2026-01-27T12:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.204343 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.204653 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.204779 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.204909 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.204991 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:32Z","lastTransitionTime":"2026-01-27T12:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.306706 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.306736 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.306744 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.306757 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.306765 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:32Z","lastTransitionTime":"2026-01-27T12:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.409337 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.409380 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.409390 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.409403 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.409411 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:32Z","lastTransitionTime":"2026-01-27T12:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.512698 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.512759 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.512777 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.512803 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.512849 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:32Z","lastTransitionTime":"2026-01-27T12:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.614638 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.614686 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.614698 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.614716 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.614728 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:32Z","lastTransitionTime":"2026-01-27T12:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.717517 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.717569 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.717584 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.717603 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.717615 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:32Z","lastTransitionTime":"2026-01-27T12:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.820207 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.820253 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.820265 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.820281 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.820293 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:32Z","lastTransitionTime":"2026-01-27T12:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.922949 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.923006 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.923031 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.923054 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:32 crc kubenswrapper[4745]: I0127 12:12:32.923068 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:32Z","lastTransitionTime":"2026-01-27T12:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.026075 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.026116 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.026126 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.026142 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.026153 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:33Z","lastTransitionTime":"2026-01-27T12:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.038693 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 17:58:15.553769628 +0000 UTC
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.073447 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.073484 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.073584 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.073475 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 12:12:33 crc kubenswrapper[4745]: E0127 12:12:33.073721 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd"
Jan 27 12:12:33 crc kubenswrapper[4745]: E0127 12:12:33.073944 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 12:12:33 crc kubenswrapper[4745]: E0127 12:12:33.074022 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 12:12:33 crc kubenswrapper[4745]: E0127 12:12:33.074103 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.128693 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.128756 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.128772 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.128793 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.128831 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:33Z","lastTransitionTime":"2026-01-27T12:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
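
Every ~100 ms above, the kubelet re-records the same NotReady condition because nothing CNI-shaped exists in /etc/kubernetes/cni/net.d/. As a rough sketch of that kind of readiness test, written in Go (not the kubelet's actual code; the accepted extensions are assumed from CNI convention):

// cni_conf_check.go — a minimal sketch of the check behind the
// "no CNI configuration file in /etc/kubernetes/cni/net.d/" message,
// assuming the conventional CNI extensions (.conf, .conflist, .json).
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one CNI config file.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		fmt.Println("network plugin not ready: no CNI configuration found:", err)
		return
	}
	fmt.Println("CNI configuration present")
}

Until a file appears there (normally written by the network operator), the runtime keeps reporting NetworkReady=false and the kubelet keeps the Ready condition False.
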
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.231103 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.231143 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.231153 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.231167 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.231178 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:33Z","lastTransitionTime":"2026-01-27T12:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.332693 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.332725 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.332736 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.332753 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.332763 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:33Z","lastTransitionTime":"2026-01-27T12:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.338100 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1811fa8-9015-4fe0-8fad-2461d64cdffd-metrics-certs\") pod \"network-metrics-daemon-swntl\" (UID: \"c1811fa8-9015-4fe0-8fad-2461d64cdffd\") " pod="openshift-multus/network-metrics-daemon-swntl"
Jan 27 12:12:33 crc kubenswrapper[4745]: E0127 12:12:33.338204 4745 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 27 12:12:33 crc kubenswrapper[4745]: E0127 12:12:33.338250 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1811fa8-9015-4fe0-8fad-2461d64cdffd-metrics-certs podName:c1811fa8-9015-4fe0-8fad-2461d64cdffd nodeName:}" failed. No retries permitted until 2026-01-27 12:12:41.338234931 +0000 UTC m=+54.143145619 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1811fa8-9015-4fe0-8fad-2461d64cdffd-metrics-certs") pod "network-metrics-daemon-swntl" (UID: "c1811fa8-9015-4fe0-8fad-2461d64cdffd") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.435038 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.435106 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.435119 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.435143 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.435159 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:33Z","lastTransitionTime":"2026-01-27T12:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.537938 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.538000 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.538019 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.538043 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.538057 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:33Z","lastTransitionTime":"2026-01-27T12:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
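
The nestedpendingoperations line above pushes the next MountVolume attempt 8 s out (durationBeforeRetry 8s), which is consistent with a per-operation doubling backoff after repeated failures. A minimal Go sketch of that pattern; the initial delay and cap are assumptions for illustration, not the kubelet's actual constants:

// backoff_sketch.go — doubling retry delay with a cap, as suggested by
// the "No retries permitted until ... (durationBeforeRetry 8s)" record.
package main

import (
	"fmt"
	"time"
)

// nextBackoff doubles the previous delay up to max; zero means first retry.
func nextBackoff(current, max time.Duration) time.Duration {
	if current == 0 {
		return 500 * time.Millisecond // assumed initial delay
	}
	next := current * 2
	if next > max {
		return max
	}
	return next
}

func main() {
	d := time.Duration(0)
	for i := 1; i <= 6; i++ {
		d = nextBackoff(d, 2*time.Minute)
		fmt.Printf("attempt %d: wait %s\n", i, d)
	}
}

An 8 s delay several attempts in matches the secret ("metrics-daemon-secret") still not being registered with the kubelet's object cache at this point in startup.
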
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.640887 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.641124 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.641135 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.641150 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.641161 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:33Z","lastTransitionTime":"2026-01-27T12:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.743574 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.743621 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.743631 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.743648 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.743658 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:33Z","lastTransitionTime":"2026-01-27T12:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.845778 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.845852 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.845868 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.845884 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.845895 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:33Z","lastTransitionTime":"2026-01-27T12:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.948057 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.948123 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.948140 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.948161 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:33 crc kubenswrapper[4745]: I0127 12:12:33.948177 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:33Z","lastTransitionTime":"2026-01-27T12:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.039157 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 13:24:34.506297176 +0000 UTC
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.051265 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.051326 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.051345 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.051370 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.051388 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:34Z","lastTransitionTime":"2026-01-27T12:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
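
The certificate_manager lines report an expiration of 2026-02-24 but rotation deadlines already in the past (2026-01-16 above, 2025-12-14 here), so rotation is due immediately and the deadline is re-drawn with fresh jitter on each pass, which is why it changes between records. A Go sketch of how a jittered deadline can land behind the current clock; the 70–90% window is an assumption for illustration, as is the NotBefore value:

// rotation_deadline_sketch.go — pick a rotation deadline at a random
// fraction of the certificate's validity period (window assumed).
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline returns a point in [70%, 90%) of the validity span.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64() // assumed jitter window
	return notBefore.Add(time.Duration(frac * float64(total)))
}

func main() {
	notBefore := time.Date(2025, 11, 26, 5, 53, 3, 0, time.UTC) // assumed issue time
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)   // expiry from the log
	now := time.Date(2026, 1, 27, 12, 12, 34, 0, time.UTC)      // node clock in the log
	d := rotationDeadline(notBefore, notAfter)
	fmt.Println("rotation deadline:", d)
	fmt.Println("rotation already due:", now.After(d))
}
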
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.074875 4745 scope.go:117] "RemoveContainer" containerID="4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.155731 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.156220 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.156251 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.156284 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.156321 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:34Z","lastTransitionTime":"2026-01-27T12:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.258328 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.258378 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.258390 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.258407 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.258419 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:34Z","lastTransitionTime":"2026-01-27T12:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.360223 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.360247 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.360256 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.360268 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.360277 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:34Z","lastTransitionTime":"2026-01-27T12:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.462416 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.462471 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.462496 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.462522 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.462541 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:34Z","lastTransitionTime":"2026-01-27T12:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
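
Each setters.go record carries the full Ready condition as inline JSON. A small Go sketch for pulling that payload into a typed struct; the field subset mirrors the log lines, not the full Kubernetes NodeCondition type:

// node_condition_sketch.go — decode the condition={...} payload from a
// "Node became not ready" record.
package main

import (
	"encoding/json"
	"fmt"
)

type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:34Z","lastTransitionTime":"2026-01-27T12:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s reason=%s at %s\n", c.Type, c.Status, c.Reason, c.LastTransitionTime)
}
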
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.544243 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.547886 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0a075761afe1cb61a52a5a40380f0fd9eef24c6f1edc5e9cb4c1a70a6039fdce"}
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.565128 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.565185 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.565204 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.565228 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.565246 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:34Z","lastTransitionTime":"2026-01-27T12:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.668763 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.668837 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.668848 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.668872 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.668884 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:34Z","lastTransitionTime":"2026-01-27T12:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.771893 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.771943 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.771955 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.771969 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.771980 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:34Z","lastTransitionTime":"2026-01-27T12:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.874359 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.874481 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.874491 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.874505 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.874547 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:34Z","lastTransitionTime":"2026-01-27T12:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.977920 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.977977 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.977997 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.978021 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:34 crc kubenswrapper[4745]: I0127 12:12:34.978038 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:34Z","lastTransitionTime":"2026-01-27T12:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.039667 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 06:46:11.986097417 +0000 UTC
Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.072862 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl"
Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.072912 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 12:12:35 crc kubenswrapper[4745]: E0127 12:12:35.072988 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd"
Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.072875 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.072868 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 12:12:35 crc kubenswrapper[4745]: E0127 12:12:35.073156 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 12:12:35 crc kubenswrapper[4745]: E0127 12:12:35.073239 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 12:12:35 crc kubenswrapper[4745]: E0127 12:12:35.073390 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.080575 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.080615 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.080624 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.080636 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.080646 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:35Z","lastTransitionTime":"2026-01-27T12:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.183829 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.183901 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.183923 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.183943 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.183958 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:35Z","lastTransitionTime":"2026-01-27T12:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.286593 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.286643 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.286655 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.286672 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.286684 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:35Z","lastTransitionTime":"2026-01-27T12:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.389096 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.389147 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.389159 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.389177 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.389201 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:35Z","lastTransitionTime":"2026-01-27T12:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.492253 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.492281 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.492290 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.492302 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.492311 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:35Z","lastTransitionTime":"2026-01-27T12:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
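
All of these records share klog's "Lmmdd hh:mm:ss.uuuuuu pid file:line]" header, which makes the repetition easy to quantify offline. A Go sketch of extracting that header; the regex is an assumption based on klog's documented format, not a parser shipped with the kubelet:

// klog_parse_sketch.go — pull severity, date, time, pid, and source
// location out of a kubenswrapper journal record.
package main

import (
	"fmt"
	"regexp"
)

// Matches e.g. `I0127 12:12:35.492311 4745 setters.go:603]`.
var klogHeader = regexp.MustCompile(`([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+) +(\d+) ([^ ]+):(\d+)\]`)

func main() {
	line := `I0127 12:12:35.492311 4745 setters.go:603] "Node became not ready"`
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no klog header found")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s:%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}

Counting matches per file:line (here setters.go:603 and kubelet_node_status.go:724) shows the same five-record group recurring roughly every 100 ms while the node stays NotReady.
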
Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.551342 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.573722 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a075761afe1cb61a52a5a40380f0fd9eef24c6f1edc5e9cb4c1a70a6039fdce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:35Z is after 2025-08-24T17:21:41Z"
Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.591158 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:35Z is after 2025-08-24T17:21:41Z"
Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.594249 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.594362 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.594381 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.594405 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.594423 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:35Z","lastTransitionTime":"2026-01-27T12:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
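
The status patches above fail because the node-identity webhook's serving certificate expired on 2025-08-24, long before the node's clock time of 2026-01-27. The same NotBefore/NotAfter test the TLS layer applies, sketched in Go with crypto/x509; the certificate path is a placeholder, not a path from this log:

// cert_validity_sketch.go — check a PEM certificate's validity window,
// mirroring the "x509: certificate has expired or is not yet valid" error.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/path/to/webhook-serving-cert.pem") // placeholder path
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	now := time.Now().UTC()
	switch {
	case now.After(cert.NotAfter):
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Println("not yet valid")
	default:
		fmt.Println("certificate within validity window")
	}
}

Because every pod status patch is routed through this webhook, the expired certificate blocks status updates cluster-wide until the certificate is regenerated or the node clock matches the certificate's validity window.
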
Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.602514 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:35Z is after 2025-08-24T17:21:41Z"
Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.616738 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaf34407c3ea76cda570485ee81a6188c5b1c441dd736272afcb42b0fe1e506a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:35Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.636919 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d28930a95bae9ff365f24415c1be01ef84747eaa260d29208cf8d0e1ac5c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee78ef909058e482d1afdea7d9a6f5d5ac76036cff7b3cfbc979ec021b40e57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"opping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 12:12:24.909519 5999 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 12:12:24.909548 5999 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 12:12:24.909581 5999 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 12:12:24.909614 5999 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 12:12:24.909641 5999 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 12:12:24.909651 5999 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 12:12:24.909686 5999 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 12:12:24.909697 5999 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 12:12:24.909705 5999 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 12:12:24.909726 5999 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 12:12:24.909732 5999 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 12:12:24.909753 5999 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 12:12:24.909884 5999 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 12:12:24.909922 5999 factory.go:656] Stopping watch factory\\\\nI0127 12:12:24.909937 5999 ovnkube.go:599] Stopped ovnkube\\\\nI0127 12:12:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d28930a95bae9ff365f24415c1be01ef84747eaa260d29208cf8d0e1ac5c27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"message\\\":\\\"b0b1d51-5ec6-479b-8881-93dfa8d30337}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 12:12:26.726327 
6202 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0127 12:12:26.726306 6202 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}\\\\nI0127 12:12:26.726246 6202 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 12:12:26.726405 6202 services_controller.go:360] Finished syncing service marketplace-operator-metrics on namespace openshift-marketplace for network=default : 55.74855ms\\\\nF0127 12:12:26.726478 6202 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call w\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:35Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.650657 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:35Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.661853 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:35Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.675304 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:35Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.686408 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed462537-34be-41e5-a6cb-f8e385dbcf99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd28c1e66e65534a92e21787b94ac2fc6ba53cfd7e70a50a54bea18ddfc3797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca350bc8a2f3ce6b6271662b78adfcbf42bdaea57a45d78132ab39d08c50562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z572k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:35Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.695054 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-swntl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1811fa8-9015-4fe0-8fad-2461d64cdffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-swntl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:35Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.696843 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.696868 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.696878 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.696891 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.696902 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:35Z","lastTransitionTime":"2026-01-27T12:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.706331 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:35Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.716757 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:35Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.728313 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:35Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.738398 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:35Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.749575 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:35Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.761222 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:35Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.799010 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.799074 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.799087 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.799106 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.799442 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:35Z","lastTransitionTime":"2026-01-27T12:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.902379 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.902447 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.902462 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.902513 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:35 crc kubenswrapper[4745]: I0127 12:12:35.902532 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:35Z","lastTransitionTime":"2026-01-27T12:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.005138 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.005185 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.005201 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.005224 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.005240 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:36Z","lastTransitionTime":"2026-01-27T12:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.040288 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 03:03:58.039864882 +0000 UTC Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.109128 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.109190 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.109208 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.109232 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.109251 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:36Z","lastTransitionTime":"2026-01-27T12:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.213201 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.213245 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.213263 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.213281 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.213292 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:36Z","lastTransitionTime":"2026-01-27T12:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.320365 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.321486 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.321580 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.321621 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.321647 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:36Z","lastTransitionTime":"2026-01-27T12:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.425420 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.425503 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.425523 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.425555 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.425576 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:36Z","lastTransitionTime":"2026-01-27T12:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.529134 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.529184 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.529195 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.529211 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.529222 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:36Z","lastTransitionTime":"2026-01-27T12:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.633117 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.633180 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.633193 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.633212 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.633226 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:36Z","lastTransitionTime":"2026-01-27T12:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.736287 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.736361 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.736388 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.736419 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.736439 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:36Z","lastTransitionTime":"2026-01-27T12:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.839709 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.839761 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.839779 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.839801 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.839893 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:36Z","lastTransitionTime":"2026-01-27T12:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.943044 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.943101 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.943114 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.943133 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:36 crc kubenswrapper[4745]: I0127 12:12:36.943146 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:36Z","lastTransitionTime":"2026-01-27T12:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.041429 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 08:13:38.868587405 +0000 UTC Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.045282 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.045340 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.045356 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.045378 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.045392 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:37Z","lastTransitionTime":"2026-01-27T12:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.073177 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.073233 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.073177 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.073250 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:37 crc kubenswrapper[4745]: E0127 12:12:37.073398 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:12:37 crc kubenswrapper[4745]: E0127 12:12:37.073576 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:12:37 crc kubenswrapper[4745]: E0127 12:12:37.073711 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:12:37 crc kubenswrapper[4745]: E0127 12:12:37.073906 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.148760 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.149013 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.149040 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.149875 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.149915 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:37Z","lastTransitionTime":"2026-01-27T12:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.253476 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.253535 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.253552 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.253578 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.253596 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:37Z","lastTransitionTime":"2026-01-27T12:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.356888 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.356958 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.356983 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.357014 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.357038 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:37Z","lastTransitionTime":"2026-01-27T12:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.459283 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.459348 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.459369 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.459398 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.459420 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:37Z","lastTransitionTime":"2026-01-27T12:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.562257 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.562360 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.562387 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.562418 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.562439 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:37Z","lastTransitionTime":"2026-01-27T12:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.668572 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.668636 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.668650 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.668667 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.668681 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:37Z","lastTransitionTime":"2026-01-27T12:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.773017 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.773100 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.773129 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.773160 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.773182 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:37Z","lastTransitionTime":"2026-01-27T12:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.876140 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.876172 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.876182 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.876198 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.876208 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:37Z","lastTransitionTime":"2026-01-27T12:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.978879 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.978936 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.978950 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.978968 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:37 crc kubenswrapper[4745]: I0127 12:12:37.978980 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:37Z","lastTransitionTime":"2026-01-27T12:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.042159 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 06:37:06.813124249 +0000 UTC Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.081906 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.081968 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.081987 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.082011 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.082030 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:38Z","lastTransitionTime":"2026-01-27T12:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.092633 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:38Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.108879 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab
95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:38Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.127126 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a075761afe1cb61a52a5a40380f0fd9eef24c6f1edc5e9cb4c1a70a6039fdce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:38Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.144638 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:38Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.158591 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:38Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.172444 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:38Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.184110 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.184231 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.184250 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.184270 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.184286 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:38Z","lastTransitionTime":"2026-01-27T12:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.190912 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaf34407c3ea76cda570485ee81a6188c5b1c441dd736272afcb42b0fe1e506a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:38Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.215785 4745 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d28930a95bae9ff365f24415c1be01ef84747eaa260d29208cf8d0e1ac5c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cee78ef909058e482d1afdea7d9a6f5d5ac76036cff7b3cfbc979ec021b40e57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"opping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 12:12:24.909519 5999 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 12:12:24.909548 5999 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 12:12:24.909581 5999 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 12:12:24.909614 5999 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 12:12:24.909641 5999 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 12:12:24.909651 5999 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 12:12:24.909686 5999 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 12:12:24.909697 5999 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 12:12:24.909705 5999 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 12:12:24.909726 5999 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 12:12:24.909732 5999 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 12:12:24.909753 5999 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 12:12:24.909884 5999 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 12:12:24.909922 5999 factory.go:656] Stopping watch factory\\\\nI0127 12:12:24.909937 5999 ovnkube.go:599] Stopped ovnkube\\\\nI0127 12:12:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d28930a95bae9ff365f24415c1be01ef84747eaa260d29208cf8d0e1ac5c27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"message\\\":\\\"b0b1d51-5ec6-479b-8881-93dfa8d30337}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 12:12:26.726327 6202 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0127 12:12:26.726306 6202 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}\\\\nI0127 12:12:26.726246 6202 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 12:12:26.726405 6202 services_controller.go:360] Finished syncing service marketplace-operator-metrics on namespace openshift-marketplace for network=default : 55.74855ms\\\\nF0127 12:12:26.726478 6202 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call w\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ff
ccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:38Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.234550 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:38Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.248444 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:38Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.260158 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:38Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.274290 4745 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed462537-34be-41e5-a6cb-f8e385dbcf99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd28c1e66e65534a92e21787b94ac2fc6ba53cfd7e70a50a54bea18ddfc3797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca350bc8a2f3ce6b6271662b78adfcbf42bdaea57a45d78132ab39d08c50562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z572k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:38Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.287061 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.287132 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.287023 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-swntl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1811fa8-9015-4fe0-8fad-2461d64cdffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-swntl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:38Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.287152 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.287350 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.287384 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:38Z","lastTransitionTime":"2026-01-27T12:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.301859 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:38Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.317041 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:38Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.335669 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:38Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.390667 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.390732 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.390751 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.390775 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.390795 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:38Z","lastTransitionTime":"2026-01-27T12:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.493517 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.493605 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.493629 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.493664 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.493688 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:38Z","lastTransitionTime":"2026-01-27T12:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.596589 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.596659 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.596670 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.596688 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.596702 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:38Z","lastTransitionTime":"2026-01-27T12:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.700061 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.700109 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.700123 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.700141 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.700155 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:38Z","lastTransitionTime":"2026-01-27T12:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.807482 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.808627 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.808639 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.808661 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.808675 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:38Z","lastTransitionTime":"2026-01-27T12:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.808675 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:38Z","lastTransitionTime":"2026-01-27T12:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.910986 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.911094 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.911117 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.911143 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:38 crc kubenswrapper[4745]: I0127 12:12:38.911161 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:38Z","lastTransitionTime":"2026-01-27T12:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.014140 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.014204 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.014218 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.014235 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.014246 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:39Z","lastTransitionTime":"2026-01-27T12:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.043151 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 01:55:54.36160362 +0000 UTC
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.073133 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.073169 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.073154 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
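Note the certificate_manager.go:356 line above and its twin at 12:12:40 below: both report the same kubelet-serving expiration (2026-02-24 05:53:03 UTC) but different rotation deadlines (2025-11-16 vs 2026-01-09), because the manager re-derives the deadline with fresh jitter inside the certificate's validity window on each pass. A sketch of that idea, assuming (not verified from this log) client-go's roughly 70-90% window; both logged deadlines are consistent with a one-year certificate issued around 2025-02-24:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline picks a jittered point inside the certificate's validity
    // window. The [0.7, 0.9) fraction is an assumption about client-go's policy,
    // not something read from this log.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        validity := notAfter.Sub(notBefore)
        frac := 0.7 + 0.2*rand.Float64()
        return notBefore.Add(time.Duration(frac * float64(validity)))
    }

    func main() {
        nb := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC) // assumed issue time
        na := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiration from the log
        // Each call lands somewhere between early November and mid January,
        // which brackets the two deadlines the log reports.
        fmt.Println(rotationDeadline(nb, na))
    }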
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.073229 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 12:12:39 crc kubenswrapper[4745]: E0127 12:12:39.073359 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd"
Jan 27 12:12:39 crc kubenswrapper[4745]: E0127 12:12:39.073496 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 12:12:39 crc kubenswrapper[4745]: E0127 12:12:39.073574 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 12:12:39 crc kubenswrapper[4745]: E0127 12:12:39.073642 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.117476 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.117537 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.117554 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.117578 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
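The four E-level pod_workers.go:1301 entries above show exactly which pods the NotReady network blocks: all four need a pod sandbox on the pod network, so their sync is skipped, while host-network pods (multus, ovnkube, the static control-plane pods) keep running and appear later in this log with running containers. The gate has roughly this shape (a simplified sketch, not the actual kubelet source):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // networkGate is the shape of the check behind "Error syncing pod,
    // skipping": while the runtime reports NetworkReady=false, only
    // host-network pods may proceed.
    func networkGate(pod *corev1.Pod, networkReady bool) error {
        if networkReady || pod.Spec.HostNetwork {
            return nil
        }
        return fmt.Errorf("network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady")
    }

    func main() {
        pod := &corev1.Pod{} // a pod on the pod network, like the four above
        fmt.Println(networkGate(pod, false)) // non-nil: sync is skipped
        pod.Spec.HostNetwork = true
        fmt.Println(networkGate(pod, false)) // nil: host-network pods pass
    }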
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.117598 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:39Z","lastTransitionTime":"2026-01-27T12:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.220370 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.220436 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.220454 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.220474 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.220486 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:39Z","lastTransitionTime":"2026-01-27T12:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.323621 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.323665 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.323675 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.323692 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.323703 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:39Z","lastTransitionTime":"2026-01-27T12:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.426682 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.426740 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.426748 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.426761 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
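The message repeated in every condition also names the exit condition for this loop: a CNI config file must appear in /etc/kubernetes/cni/net.d/, which on this cluster is ovn-kubernetes's job (the ovnkube-node status near the end of this excerpt shows ovnkube-controller still unready). The runtime's readiness question reduces to a directory scan plus validation of the config it finds; a minimal presence-only sketch:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // cniConfigPresent asks the core question behind NetworkReady: is there
    // any CNI config file in the directory the runtime watches (here
    // /etc/kubernetes/cni/net.d per the log)? Real runtimes also parse and
    // validate the file; this sketch only checks for presence.
    func cniConfigPresent(dir string) (bool, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false, err
        }
        for _, e := range entries {
            if e.IsDir() {
                continue
            }
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d")
        fmt.Println(ok, err)
    }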
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.426771 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:39Z","lastTransitionTime":"2026-01-27T12:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.529511 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.529573 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.529590 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.529616 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.529641 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:39Z","lastTransitionTime":"2026-01-27T12:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.631506 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.631553 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.631564 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.631584 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.631597 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:39Z","lastTransitionTime":"2026-01-27T12:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.734663 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.734729 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.734754 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.734845 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
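From 12:12:40 onward the log is dominated by status_manager.go:875 "Failed to update status for pod" entries whose strategic-merge-patch payloads journald renders with doubled backslash escaping. To read one, unquote a level of escaping and parse the result as JSON; a sketch (the sample payload below is abbreviated from those entries):

    package main

    import (
        "encoding/json"
        "fmt"
        "strconv"
    )

    func main() {
        // Abbreviated sample from the status_manager entries that follow;
        // paste a full escaped payload between the backticks to inspect it.
        escaped := `{\"metadata\":{\"uid\":\"c1811fa8-9015-4fe0-8fad-2461d64cdffd\"}}`
        // strconv.Unquote undoes one level of backslash escaping; repeat for
        // deeper levels (the embedded container logs are escaped twice more).
        unquoted, err := strconv.Unquote(`"` + escaped + `"`)
        if err != nil {
            panic(err)
        }
        var patch map[string]any
        if err := json.Unmarshal([]byte(unquoted), &patch); err != nil {
            panic(err)
        }
        fmt.Printf("%+v\n", patch)
    }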
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.734866 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:39Z","lastTransitionTime":"2026-01-27T12:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.837574 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.837609 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.837617 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.837630 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.837639 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:39Z","lastTransitionTime":"2026-01-27T12:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.940563 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.940626 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.940645 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.940663 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:39 crc kubenswrapper[4745]: I0127 12:12:39.940674 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:39Z","lastTransitionTime":"2026-01-27T12:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.043117 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.043208 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.043233 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.043267 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.043291 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:40Z","lastTransitionTime":"2026-01-27T12:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.043360 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 13:58:08.123592202 +0000 UTC Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.075142 4745 scope.go:117] "RemoveContainer" containerID="f3d28930a95bae9ff365f24415c1be01ef84747eaa260d29208cf8d0e1ac5c27" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.087752 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-swntl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1811fa8-9015-4fe0-8fad-2461d64cdffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-swntl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.100933 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.117174 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.132015 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.146301 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.146366 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.146377 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.146392 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.146402 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:40Z","lastTransitionTime":"2026-01-27T12:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.156217 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.168945 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed462537-34be-41e5-a6cb-f8e385dbcf99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd28c1e66e65534a92e21787b94ac2fc6ba53cfd7e70a50a54bea18ddfc3797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca350bc8a2f3ce6b6271662b78adfcbf42bdaea57a45d78132ab39d08c50562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z572k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 
12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.181884 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.192889 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.208299 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a075761afe1cb61a52a5a40380f0fd9eef24c6f1edc5e9cb4c1a70a6039fdce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.222635 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.233522 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.249003 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.249064 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.249077 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.249094 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.249106 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:40Z","lastTransitionTime":"2026-01-27T12:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.256272 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d28930a95bae9ff365f24415c1be01ef84747eaa260d29208cf8d0e1ac5c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d28930a95bae9ff365f24415c1be01ef84747eaa260d29208cf8d0e1ac5c27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"message\\\":\\\"b0b1d51-5ec6-479b-8881-93dfa8d30337}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 12:12:26.726327 6202 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0127 12:12:26.726306 6202 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}\\\\nI0127 12:12:26.726246 6202 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 12:12:26.726405 6202 services_controller.go:360] Finished syncing service marketplace-operator-metrics on namespace openshift-marketplace for network=default : 55.74855ms\\\\nF0127 12:12:26.726478 6202 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call 
w\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bnfh4_openshift-ovn-kubernetes(26b1987b-69bb-4768-a874-5a97b3327469)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.269332 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.280541 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.291675 4745 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.304885 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaf34407c3ea76cda570485ee81a6188c5b1c441dd736272afcb42b0fe1e506a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.351740 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.351774 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:40 crc 
kubenswrapper[4745]: I0127 12:12:40.351783 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.351796 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.351805 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:40Z","lastTransitionTime":"2026-01-27T12:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.453801 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.453878 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.453893 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.453912 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.453922 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:40Z","lastTransitionTime":"2026-01-27T12:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.481678 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.556576 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.556620 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.556632 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.556649 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.556663 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:40Z","lastTransitionTime":"2026-01-27T12:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.570704 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bnfh4_26b1987b-69bb-4768-a874-5a97b3327469/ovnkube-controller/1.log" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.574279 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" event={"ID":"26b1987b-69bb-4768-a874-5a97b3327469","Type":"ContainerStarted","Data":"3d6558936d77cc65d9edfd96292f88d201f206f8573847aedc0e5c43ee2cb448"} Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.574860 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.595898 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82
e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.609905 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.633978 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a075761afe1cb61a52a5a40380f0fd9eef24c6f1edc5e9cb4c1a70a6039fdce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.649326 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.659874 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.659972 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.660004 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.660052 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.663035 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:40Z","lastTransitionTime":"2026-01-27T12:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.673348 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.699926 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6558936d77cc65d9edfd96292f88d201f206f8573847aedc0e5c43ee2cb448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d28930a95bae9ff365f24415c1be01ef84747eaa260d29208cf8d0e1ac5c27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"message\\\":\\\"b0b1d51-5ec6-479b-8881-93dfa8d30337}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 12:12:26.726327 6202 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0127 12:12:26.726306 6202 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}\\\\nI0127 12:12:26.726246 6202 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 12:12:26.726405 6202 services_controller.go:360] Finished syncing service marketplace-operator-metrics on namespace openshift-marketplace for network=default : 55.74855ms\\\\nF0127 12:12:26.726478 6202 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call 
w\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.714930 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.728386 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.741892 4745 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.756732 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaf34407c3ea76cda570485ee81a6188c5b1c441dd736272afcb42b0fe1e506a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.765597 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.765654 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:40 crc 
kubenswrapper[4745]: I0127 12:12:40.765666 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.765684 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.765696 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:40Z","lastTransitionTime":"2026-01-27T12:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.771654 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-swntl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1811fa8-9015-4fe0-8fad-2461d64cdffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-swntl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.786474 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.801054 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.818158 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.831065 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.846646 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed462537-34be-41e5-a6cb-f8e385dbcf99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd28c1e66e65534a92e21787b94ac2fc6ba53cfd7e70a50a54bea18ddfc3797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca350bc8a2f3ce6b6271662b78adfcbf42bdaea57a45d78132ab39d08c50562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z572k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:40Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.868677 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.868716 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.868725 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.868739 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.868749 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:40Z","lastTransitionTime":"2026-01-27T12:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.971118 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.971167 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.971179 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.971197 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:40 crc kubenswrapper[4745]: I0127 12:12:40.971210 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:40Z","lastTransitionTime":"2026-01-27T12:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.043977 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 14:38:34.912336396 +0000 UTC Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.072827 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.072859 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:12:41 crc kubenswrapper[4745]: E0127 12:12:41.072951 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.072840 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:12:41 crc kubenswrapper[4745]: E0127 12:12:41.073059 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.073057 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:41 crc kubenswrapper[4745]: E0127 12:12:41.073159 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:12:41 crc kubenswrapper[4745]: E0127 12:12:41.073282 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.074212 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.074239 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.074249 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.074271 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.074281 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:41Z","lastTransitionTime":"2026-01-27T12:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.176092 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.176131 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.176141 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.176158 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.176169 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:41Z","lastTransitionTime":"2026-01-27T12:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.279930 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.280024 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.280051 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.280079 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.280101 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:41Z","lastTransitionTime":"2026-01-27T12:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.346369 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1811fa8-9015-4fe0-8fad-2461d64cdffd-metrics-certs\") pod \"network-metrics-daemon-swntl\" (UID: \"c1811fa8-9015-4fe0-8fad-2461d64cdffd\") " pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:12:41 crc kubenswrapper[4745]: E0127 12:12:41.346581 4745 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 12:12:41 crc kubenswrapper[4745]: E0127 12:12:41.346746 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1811fa8-9015-4fe0-8fad-2461d64cdffd-metrics-certs podName:c1811fa8-9015-4fe0-8fad-2461d64cdffd nodeName:}" failed. No retries permitted until 2026-01-27 12:12:57.346682508 +0000 UTC m=+70.151593246 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1811fa8-9015-4fe0-8fad-2461d64cdffd-metrics-certs") pod "network-metrics-daemon-swntl" (UID: "c1811fa8-9015-4fe0-8fad-2461d64cdffd") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.384442 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.384508 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.384525 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.384548 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.384566 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:41Z","lastTransitionTime":"2026-01-27T12:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.487171 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.487228 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.487240 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.487258 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.487272 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:41Z","lastTransitionTime":"2026-01-27T12:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.579017 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bnfh4_26b1987b-69bb-4768-a874-5a97b3327469/ovnkube-controller/2.log" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.580014 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bnfh4_26b1987b-69bb-4768-a874-5a97b3327469/ovnkube-controller/1.log" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.582589 4745 generic.go:334] "Generic (PLEG): container finished" podID="26b1987b-69bb-4768-a874-5a97b3327469" containerID="3d6558936d77cc65d9edfd96292f88d201f206f8573847aedc0e5c43ee2cb448" exitCode=1 Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.582635 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" event={"ID":"26b1987b-69bb-4768-a874-5a97b3327469","Type":"ContainerDied","Data":"3d6558936d77cc65d9edfd96292f88d201f206f8573847aedc0e5c43ee2cb448"} Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.582674 4745 scope.go:117] "RemoveContainer" containerID="f3d28930a95bae9ff365f24415c1be01ef84747eaa260d29208cf8d0e1ac5c27" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.583628 4745 scope.go:117] "RemoveContainer" containerID="3d6558936d77cc65d9edfd96292f88d201f206f8573847aedc0e5c43ee2cb448" Jan 27 12:12:41 crc kubenswrapper[4745]: E0127 12:12:41.583958 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bnfh4_openshift-ovn-kubernetes(26b1987b-69bb-4768-a874-5a97b3327469)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" podUID="26b1987b-69bb-4768-a874-5a97b3327469" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.590003 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.590037 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.590046 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.590060 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.590075 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:41Z","lastTransitionTime":"2026-01-27T12:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.606674 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:41Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.624700 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:41Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.643226 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:41Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.659036 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:41Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.674163 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed462537-34be-41e5-a6cb-f8e385dbcf99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd28c1e66e65534a92e21787b94ac2fc6ba53cfd7e70a50a54bea18ddfc3797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca350bc8a2f3ce6b6271662b78adfcbf42bdaea57a45d78132ab39d08c50562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z572k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:41Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.691572 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-swntl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1811fa8-9015-4fe0-8fad-2461d64cdffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-swntl\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:41Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.693075 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.693113 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.693146 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.693165 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.693178 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:41Z","lastTransitionTime":"2026-01-27T12:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.709924 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:41Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.721989 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:41Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.735595 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a075761afe1cb61a52a5a40380f0fd9eef24c6f1edc5e9cb4c1a70a6039fdce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:41Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.752564 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:41Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.762940 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:41Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.778709 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:41Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.792990 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:41Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.798761 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.798844 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.798863 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.798890 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.798910 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:41Z","lastTransitionTime":"2026-01-27T12:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.811007 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:41Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.831960 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaf34407c3ea76cda570485ee81a6188c5b1c441dd736272afcb42b0fe1e506a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:41Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.861166 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6558936d77cc65d9edfd96292f88d201f206f8
573847aedc0e5c43ee2cb448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d28930a95bae9ff365f24415c1be01ef84747eaa260d29208cf8d0e1ac5c27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"message\\\":\\\"b0b1d51-5ec6-479b-8881-93dfa8d30337}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 12:12:26.726327 6202 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0127 12:12:26.726306 6202 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}\\\\nI0127 12:12:26.726246 6202 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 12:12:26.726405 6202 services_controller.go:360] Finished syncing service marketplace-operator-metrics on namespace openshift-marketplace for network=default : 55.74855ms\\\\nF0127 12:12:26.726478 6202 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call w\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d6558936d77cc65d9edfd96292f88d201f206f8573847aedc0e5c43ee2cb448\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:12:41Z\\\",\\\"message\\\":\\\"nformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:41Z is after 2025-08-24T17:21:41Z]\\\\nI0127 12:12:41.101837 6422 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} 
selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 12:12:41.101854 6422 model_client.go:382] Update o\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:41Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.901792 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.901876 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.901887 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.901907 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.901921 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:41Z","lastTransitionTime":"2026-01-27T12:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.977376 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.977428 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.977441 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.977458 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:41 crc kubenswrapper[4745]: I0127 12:12:41.977471 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:41Z","lastTransitionTime":"2026-01-27T12:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:41 crc kubenswrapper[4745]: E0127 12:12:41.994987 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:41Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.000265 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.000327 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.000344 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.000371 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.000388 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:42Z","lastTransitionTime":"2026-01-27T12:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:42 crc kubenswrapper[4745]: E0127 12:12:42.019146 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:42Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.023754 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.023849 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.023869 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.023894 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.023912 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:42Z","lastTransitionTime":"2026-01-27T12:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.045033 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 02:15:34.600273758 +0000 UTC Jan 27 12:12:42 crc kubenswrapper[4745]: E0127 12:12:42.045988 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:42Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.050434 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.050491 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.050508 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.050532 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.050550 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:42Z","lastTransitionTime":"2026-01-27T12:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:42 crc kubenswrapper[4745]: E0127 12:12:42.069990 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:42Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.074992 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.075028 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.075042 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.075067 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.075082 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:42Z","lastTransitionTime":"2026-01-27T12:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:42 crc kubenswrapper[4745]: E0127 12:12:42.088946 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:42Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:42 crc kubenswrapper[4745]: E0127 12:12:42.089082 4745 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.090938 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.090968 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.090979 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.090996 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.091008 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:42Z","lastTransitionTime":"2026-01-27T12:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.193120 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.193161 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.193173 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.193188 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.193197 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:42Z","lastTransitionTime":"2026-01-27T12:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.296880 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.296925 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.296937 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.296956 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.296967 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:42Z","lastTransitionTime":"2026-01-27T12:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.399904 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.399958 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.399971 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.400030 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.400047 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:42Z","lastTransitionTime":"2026-01-27T12:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.503545 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.503620 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.503640 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.503666 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.503691 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:42Z","lastTransitionTime":"2026-01-27T12:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.588941 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bnfh4_26b1987b-69bb-4768-a874-5a97b3327469/ovnkube-controller/2.log" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.593745 4745 scope.go:117] "RemoveContainer" containerID="3d6558936d77cc65d9edfd96292f88d201f206f8573847aedc0e5c43ee2cb448" Jan 27 12:12:42 crc kubenswrapper[4745]: E0127 12:12:42.594116 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bnfh4_openshift-ovn-kubernetes(26b1987b-69bb-4768-a874-5a97b3327469)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" podUID="26b1987b-69bb-4768-a874-5a97b3327469" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.606627 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.606658 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.606668 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.606687 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.606699 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:42Z","lastTransitionTime":"2026-01-27T12:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.613443 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a075761afe1cb61a52a5a40380f0fd9eef24c6f1edc5e9cb4c1a70a6039fdce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:42Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.631030 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:42Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.645388 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:42Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.662502 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:42Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.679521 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:42Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.697410 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:42Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.713334 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.713379 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.713392 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.713412 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.713425 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:42Z","lastTransitionTime":"2026-01-27T12:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.717576 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaf34407c3ea76cda570485ee81a6188c5b1c441dd736272afcb42b0fe1e506a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:42Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.736714 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41
c8544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6558936d77cc65d9edfd96292f88d201f206f8573847aedc0e5c43ee2cb448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d6558936d77cc65d9edfd96292f88d201f206f8573847aedc0e5c43ee2cb448\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:12:41Z\\\",\\\"message\\\":\\\"nformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:41Z is after 2025-08-24T17:21:41Z]\\\\nI0127 12:12:41.101837 6422 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 12:12:41.101854 6422 model_client.go:382] Update 
o\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bnfh4_openshift-ovn-kubernetes(26b1987b-69bb-4768-a874-5a97b3327469)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:42Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.760652 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:42Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.776514 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:42Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.789683 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:42Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.804622 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:42Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.817949 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.817986 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.817996 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.818018 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.818030 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:42Z","lastTransitionTime":"2026-01-27T12:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.824212 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed462537-34be-41e5-a6cb-f8e385dbcf99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd28c1e66e65534a92e21787b94ac2fc6ba53cfd7e70a50a54bea18ddfc3797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca350bc8a2f3ce6b6271662b78adfcbf42bdaea57a45d78132ab39d08c50562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z572k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:42Z is after 2025-08-24T17:21:41Z" Jan 27 
12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.837403 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-swntl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1811fa8-9015-4fe0-8fad-2461d64cdffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-swntl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:42Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.858776 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:42Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.874605 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:42Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.920097 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.920478 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.920632 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.920783 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.920938 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:42Z","lastTransitionTime":"2026-01-27T12:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.963639 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:12:42 crc kubenswrapper[4745]: E0127 12:12:42.963803 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:13:14.963771161 +0000 UTC m=+87.768681879 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.963902 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:42 crc kubenswrapper[4745]: I0127 12:12:42.963974 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:42 crc kubenswrapper[4745]: E0127 12:12:42.964052 4745 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 12:12:42 crc kubenswrapper[4745]: E0127 12:12:42.964164 4745 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 12:12:42 crc kubenswrapper[4745]: E0127 12:12:42.964192 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 12:13:14.964140181 +0000 UTC m=+87.769050879 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 12:12:42 crc kubenswrapper[4745]: E0127 12:12:42.964247 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 12:13:14.964226154 +0000 UTC m=+87.769136882 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.024691 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.024756 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.024780 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.024838 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.024862 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:43Z","lastTransitionTime":"2026-01-27T12:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.048664 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 00:36:05.938862174 +0000 UTC Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.065491 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.065578 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:12:43 crc kubenswrapper[4745]: E0127 12:12:43.065750 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 12:12:43 crc kubenswrapper[4745]: E0127 12:12:43.065782 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 12:12:43 crc kubenswrapper[4745]: E0127 12:12:43.065800 4745 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 12:12:43 crc kubenswrapper[4745]: E0127 12:12:43.065750 4745 projected.go:288] Couldn't 
get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 12:12:43 crc kubenswrapper[4745]: E0127 12:12:43.065871 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 12:12:43 crc kubenswrapper[4745]: E0127 12:12:43.065883 4745 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 12:12:43 crc kubenswrapper[4745]: E0127 12:12:43.065885 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 12:13:15.065861636 +0000 UTC m=+87.870772324 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 12:12:43 crc kubenswrapper[4745]: E0127 12:12:43.065910 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 12:13:15.065901747 +0000 UTC m=+87.870812425 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.073379 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.073419 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:12:43 crc kubenswrapper[4745]: E0127 12:12:43.073573 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.073595 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:43 crc kubenswrapper[4745]: E0127 12:12:43.073726 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:12:43 crc kubenswrapper[4745]: E0127 12:12:43.074051 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.074439 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:12:43 crc kubenswrapper[4745]: E0127 12:12:43.074550 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.127736 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.127798 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.127833 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.127859 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.127875 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:43Z","lastTransitionTime":"2026-01-27T12:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.231198 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.231271 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.231289 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.231315 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.231333 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:43Z","lastTransitionTime":"2026-01-27T12:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.334778 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.334868 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.334887 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.334912 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.334931 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:43Z","lastTransitionTime":"2026-01-27T12:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.437859 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.438329 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.438349 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.438376 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.438395 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:43Z","lastTransitionTime":"2026-01-27T12:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.542695 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.542770 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.542792 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.542915 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.542940 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:43Z","lastTransitionTime":"2026-01-27T12:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.646592 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.646658 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.646676 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.646706 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.646729 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:43Z","lastTransitionTime":"2026-01-27T12:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.749671 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.749731 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.749753 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.749782 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.749802 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:43Z","lastTransitionTime":"2026-01-27T12:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.853134 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.853204 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.853231 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.853261 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.853288 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:43Z","lastTransitionTime":"2026-01-27T12:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.892482 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.905429 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.908865 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:43Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.921673 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:43Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.934358 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efd
d19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a075761afe1cb61a52a5a40380f0fd9eef24c6f1edc5e9cb4c1a70a6039fdce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:43Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.948456 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:43Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.956983 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.957317 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.957531 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.957715 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.957945 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:43Z","lastTransitionTime":"2026-01-27T12:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.961578 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:43Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.975746 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:43Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:43 crc kubenswrapper[4745]: I0127 12:12:43.989030 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:43Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.005098 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:44Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.021080 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaf34407c3ea76cda570485ee81a6188c5b1c441dd736272afcb42b0fe1e506a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:44Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.046127 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6558936d77cc65d9edfd96292f88d201f206f8
573847aedc0e5c43ee2cb448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d6558936d77cc65d9edfd96292f88d201f206f8573847aedc0e5c43ee2cb448\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:12:41Z\\\",\\\"message\\\":\\\"nformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:41Z is after 2025-08-24T17:21:41Z]\\\\nI0127 12:12:41.101837 6422 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 12:12:41.101854 6422 model_client.go:382] Update o\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bnfh4_openshift-ovn-kubernetes(26b1987b-69bb-4768-a874-5a97b3327469)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:44Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.049412 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 04:27:59.60751044 +0000 UTC Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.060913 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.060991 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.061020 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.061048 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.061071 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:44Z","lastTransitionTime":"2026-01-27T12:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.072122 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:44Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.090050 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:44Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.111948 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:44Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.124687 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:44Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.140366 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed462537-34be-41e5-a6cb-f8e385dbcf99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd28c1e66e65534a92e21787b94ac2fc6ba53cfd7e70a50a54bea18ddfc3797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca350bc8a2f3ce6b6271662b78adfcbf42bdaea57a45d78132ab39d08c50562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z572k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:44Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.153636 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-swntl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1811fa8-9015-4fe0-8fad-2461d64cdffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-swntl\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:44Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.164040 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.164212 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.164492 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.164792 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.164884 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:44Z","lastTransitionTime":"2026-01-27T12:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.268520 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.268569 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.268588 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.268611 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.268629 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:44Z","lastTransitionTime":"2026-01-27T12:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.991034 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.991101 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.991119 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.991147 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:44 crc kubenswrapper[4745]: I0127 12:12:44.991170 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:44Z","lastTransitionTime":"2026-01-27T12:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:45 crc kubenswrapper[4745]: I0127 12:12:45.049728 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 23:55:51.687976699 +0000 UTC Jan 27 12:12:45 crc kubenswrapper[4745]: I0127 12:12:45.073242 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:12:45 crc kubenswrapper[4745]: I0127 12:12:45.073292 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:12:45 crc kubenswrapper[4745]: I0127 12:12:45.073255 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:12:45 crc kubenswrapper[4745]: I0127 12:12:45.073416 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:45 crc kubenswrapper[4745]: E0127 12:12:45.073516 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:12:45 crc kubenswrapper[4745]: E0127 12:12:45.073623 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:12:45 crc kubenswrapper[4745]: E0127 12:12:45.074126 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:12:45 crc kubenswrapper[4745]: E0127 12:12:45.074965 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:12:45 crc kubenswrapper[4745]: I0127 12:12:45.093388 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:45 crc kubenswrapper[4745]: I0127 12:12:45.093427 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:45 crc kubenswrapper[4745]: I0127 12:12:45.093440 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:45 crc kubenswrapper[4745]: I0127 12:12:45.093458 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:45 crc kubenswrapper[4745]: I0127 12:12:45.093470 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:45Z","lastTransitionTime":"2026-01-27T12:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:45 crc kubenswrapper[4745]: I0127 12:12:45.196879 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:45 crc kubenswrapper[4745]: I0127 12:12:45.196945 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:45 crc kubenswrapper[4745]: I0127 12:12:45.196957 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:45 crc kubenswrapper[4745]: I0127 12:12:45.196996 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:45 crc kubenswrapper[4745]: I0127 12:12:45.197009 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:45Z","lastTransitionTime":"2026-01-27T12:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:46 crc kubenswrapper[4745]: I0127 12:12:46.050058 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 00:30:06.137033735 +0000 UTC Jan 27 12:12:46 crc kubenswrapper[4745]: I0127 12:12:46.125634 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:46 crc kubenswrapper[4745]: I0127 12:12:46.125672 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:46 crc kubenswrapper[4745]: I0127 12:12:46.125683 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:46 crc kubenswrapper[4745]: I0127 12:12:46.125898 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:46 crc kubenswrapper[4745]: I0127 12:12:46.125910 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:46Z","lastTransitionTime":"2026-01-27T12:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:46 crc kubenswrapper[4745]: I0127 12:12:46.228530 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:46 crc kubenswrapper[4745]: I0127 12:12:46.228615 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:46 crc kubenswrapper[4745]: I0127 12:12:46.228628 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:46 crc kubenswrapper[4745]: I0127 12:12:46.228647 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:46 crc kubenswrapper[4745]: I0127 12:12:46.228659 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:46Z","lastTransitionTime":"2026-01-27T12:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:46 crc kubenswrapper[4745]: I0127 12:12:46.950388 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:46 crc kubenswrapper[4745]: I0127 12:12:46.950433 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:46 crc kubenswrapper[4745]: I0127 12:12:46.950446 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:46 crc kubenswrapper[4745]: I0127 12:12:46.950463 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:46 crc kubenswrapper[4745]: I0127 12:12:46.950477 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:46Z","lastTransitionTime":"2026-01-27T12:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.050453 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 20:38:42.541387158 +0000 UTC Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.053433 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.053470 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.053488 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.053506 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.053517 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:47Z","lastTransitionTime":"2026-01-27T12:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.073904 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.073961 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:47 crc kubenswrapper[4745]: E0127 12:12:47.074043 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.073923 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:12:47 crc kubenswrapper[4745]: E0127 12:12:47.074218 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.074290 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:12:47 crc kubenswrapper[4745]: E0127 12:12:47.074423 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:12:47 crc kubenswrapper[4745]: E0127 12:12:47.074528 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.156453 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.156494 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.156504 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.156523 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.156533 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:47Z","lastTransitionTime":"2026-01-27T12:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.399296 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.414128 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.425355 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.439418 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49be802e-401f-41a8-aa5c-ae1d63523f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082334b7a9de2a2744faf97640bf056777c5f2da7cf5ff825121305a40dbf6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51afaee5ac37b7ae8bf400d42e8d2d6a3e12687ea9e080d7faa4e09dff5bf758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e1b927f3b132dddf8fc7a708f4eaff3596ea17106764af4369e50b0165a373\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15cb80506071a3f2c69004a1281de26c5d8e3ab484c430453248f65e3c61b25f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cb80506071a3f2c69004a1281de26c5d8e3ab484c430453248f65e3c61b25f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.451645 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.464668 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.464712 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.464721 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.464736 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.464749 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:47Z","lastTransitionTime":"2026-01-27T12:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.466344 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a075761afe1cb61a52a5a40380f0fd9eef24c6f1edc5e9cb4c1a70a6039fdce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.479994 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.490130 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.502845 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.516528 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaf34407c3ea76cda570485ee81a6188c5b1c441dd736272afcb42b0fe1e506a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.534643 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6558936d77cc65d9edfd96292f88d201f206f8
573847aedc0e5c43ee2cb448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d6558936d77cc65d9edfd96292f88d201f206f8573847aedc0e5c43ee2cb448\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:12:41Z\\\",\\\"message\\\":\\\"nformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:41Z is after 2025-08-24T17:21:41Z]\\\\nI0127 12:12:41.101837 6422 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 12:12:41.101854 6422 model_client.go:382] Update o\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bnfh4_openshift-ovn-kubernetes(26b1987b-69bb-4768-a874-5a97b3327469)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.549327 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.566588 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.567570 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.567608 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.567623 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.567641 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.567655 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:47Z","lastTransitionTime":"2026-01-27T12:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.578604 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.593332 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed462537-34be-41e5-a6cb-f8e385dbcf99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd28c1e66e65534a92e21787b94ac2fc6ba53cfd7e70a50a54bea18ddfc3797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca350bc8a2f3ce6b6271662b78adfcbf42bdaea57a45d78132ab39d08c50562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:
24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z572k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.607205 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-swntl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1811fa8-9015-4fe0-8fad-2461d64cdffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-swntl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.619492 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.636604 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.670316 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.670350 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.670358 4745 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.670371 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.670379 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:47Z","lastTransitionTime":"2026-01-27T12:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.773658 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.773715 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.773725 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.773741 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.773753 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:47Z","lastTransitionTime":"2026-01-27T12:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.876216 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.876270 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.876281 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.876294 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:47 crc kubenswrapper[4745]: I0127 12:12:47.876303 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:47Z","lastTransitionTime":"2026-01-27T12:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.066272 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 14:15:22.338193243 +0000 UTC Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.075847 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.076771 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.076783 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.076803 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.076839 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:48Z","lastTransitionTime":"2026-01-27T12:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.099088 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:48Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.116136 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:48Z is after 
2025-08-24T17:21:41Z" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.129916 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"pod
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:48Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.142967 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:48Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.158738 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed462537-34be-41e5-a6cb-f8e385dbcf99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd28c1e66e65534a92e21787b94ac2fc6ba53cfd7e70a50a54bea18ddfc3797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca350bc8a2f3ce6b6271662b78adfcbf42bdaea57a45d78132ab39d08c50562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z572k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:48Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.171732 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-swntl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1811fa8-9015-4fe0-8fad-2461d64cdffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-swntl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:48Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.179646 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.179692 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.179704 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.179724 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.179737 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:48Z","lastTransitionTime":"2026-01-27T12:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.185113 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49be802e-401f-41a8-aa5c-ae1d63523f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082334b7a9de2a2744faf97640bf056777c5f2da7cf5ff825121305a40dbf6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51afaee5ac37b7ae8bf400d42e8d2d6a3e12687ea9e080d7faa4e09dff5bf758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e1b927f3b132dddf8fc7a708f4eaff3596ea17106764af4369e50b0165a373\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15cb80506071a3f2c69004a1281de26c5d8e3ab484c430453248f65e3c61b25f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cb80506071a3f2c69004a1281de26c5d8e3ab484c430453248f65e3c61b25f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:48Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.202003 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:48Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.215757 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:48Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.233991 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a075761afe1cb61a52a5a40380f0fd9eef24c6f1edc5e9cb4c1a70a6039fdce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:48Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.248148 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:48Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.263427 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:48Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.282123 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:48Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.282758 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.282863 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.282900 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.282929 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.282951 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:48Z","lastTransitionTime":"2026-01-27T12:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.299667 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:48Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.314375 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:48Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.331270 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaf34407c3ea76cda570485ee81a6188c5b1c441dd736272afcb42b0fe1e506a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:48Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.352448 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6558936d77cc65d9edfd96292f88d201f206f8
573847aedc0e5c43ee2cb448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d6558936d77cc65d9edfd96292f88d201f206f8573847aedc0e5c43ee2cb448\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:12:41Z\\\",\\\"message\\\":\\\"nformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:41Z is after 2025-08-24T17:21:41Z]\\\\nI0127 12:12:41.101837 6422 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 12:12:41.101854 6422 model_client.go:382] Update o\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bnfh4_openshift-ovn-kubernetes(26b1987b-69bb-4768-a874-5a97b3327469)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:48Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.386150 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.386204 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.386214 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.386237 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.386247 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:48Z","lastTransitionTime":"2026-01-27T12:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.490795 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.490858 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.490868 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.490886 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.490898 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:48Z","lastTransitionTime":"2026-01-27T12:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.594306 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.594368 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.594387 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.594413 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.594432 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:48Z","lastTransitionTime":"2026-01-27T12:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.697131 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.697184 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.697193 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.697208 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.697217 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:48Z","lastTransitionTime":"2026-01-27T12:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.799607 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.799649 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.799661 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.799677 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.799688 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:48Z","lastTransitionTime":"2026-01-27T12:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.902470 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.902866 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.902880 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.902901 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:48 crc kubenswrapper[4745]: I0127 12:12:48.902915 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:48Z","lastTransitionTime":"2026-01-27T12:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:49 crc kubenswrapper[4745]: I0127 12:12:49.005929 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:49 crc kubenswrapper[4745]: I0127 12:12:49.006013 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:49 crc kubenswrapper[4745]: I0127 12:12:49.006037 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:49 crc kubenswrapper[4745]: I0127 12:12:49.006069 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:49 crc kubenswrapper[4745]: I0127 12:12:49.006089 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:49Z","lastTransitionTime":"2026-01-27T12:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:49 crc kubenswrapper[4745]: I0127 12:12:49.066990 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 00:03:18.804368036 +0000 UTC Jan 27 12:12:49 crc kubenswrapper[4745]: I0127 12:12:49.073370 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:12:49 crc kubenswrapper[4745]: I0127 12:12:49.073406 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:12:49 crc kubenswrapper[4745]: I0127 12:12:49.073383 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:12:49 crc kubenswrapper[4745]: I0127 12:12:49.073365 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:49 crc kubenswrapper[4745]: E0127 12:12:49.073506 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:12:49 crc kubenswrapper[4745]: E0127 12:12:49.073617 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:12:49 crc kubenswrapper[4745]: E0127 12:12:49.073709 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:12:49 crc kubenswrapper[4745]: E0127 12:12:49.073766 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:12:49 crc kubenswrapper[4745]: I0127 12:12:49.108444 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:49 crc kubenswrapper[4745]: I0127 12:12:49.108485 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:49 crc kubenswrapper[4745]: I0127 12:12:49.108496 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:49 crc kubenswrapper[4745]: I0127 12:12:49.108513 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:49 crc kubenswrapper[4745]: I0127 12:12:49.108524 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:49Z","lastTransitionTime":"2026-01-27T12:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:49 crc kubenswrapper[4745]: I0127 12:12:49.211309 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:49 crc kubenswrapper[4745]: I0127 12:12:49.211352 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:49 crc kubenswrapper[4745]: I0127 12:12:49.211361 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:49 crc kubenswrapper[4745]: I0127 12:12:49.211373 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:49 crc kubenswrapper[4745]: I0127 12:12:49.211384 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:49Z","lastTransitionTime":"2026-01-27T12:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 12:12:49 crc kubenswrapper[4745]: I0127 12:12:49.313270 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:49 crc kubenswrapper[4745]: I0127 12:12:49.313298 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:49 crc kubenswrapper[4745]: I0127 12:12:49.313305 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:49 crc kubenswrapper[4745]: I0127 12:12:49.313320 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:49 crc kubenswrapper[4745]: I0127 12:12:49.313329 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:49Z","lastTransitionTime":"2026-01-27T12:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
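The kubelet repeats this exact five-entry status cycle roughly every 100 ms (the copies from 12:12:49.416 through 12:12:50.033 are elided above) until the Ready condition can flip: NetworkReady stays false only because no CNI config file exists yet under /etc/kubernetes/cni/net.d/. A minimal Go sketch of an equivalent readiness probe; the directory and message come from the log, while the helper itself is illustrative and not kubelet source:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniReady reports whether at least one CNI network config is present in
// the directory the kubelet is watching (illustrative check only).
func cniReady(confDir string) (bool, error) {
	for _, glob := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, glob))
		if err != nil {
			return false, err
		}
		if len(matches) > 0 {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	confDir := "/etc/kubernetes/cni/net.d"
	ready, err := cniReady(confDir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if !ready {
		// Mirrors the condition the kubelet keeps logging above.
		fmt.Printf("NetworkReady=false: no CNI configuration file in %s\n", confDir)
	}
}
```

Once the network operator drops a conf file into that directory, the same loop flips the Ready condition back on the next pass.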
Jan 27 12:12:50 crc kubenswrapper[4745]: I0127 12:12:50.067713 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 00:03:01.420789101 +0000 UTC
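The kubelet-serving certificate here is still valid until 2026-02-24; the rotation deadline is recomputed with jitter on each pass, which is why it jumps between 2025-12-04, 2025-12-25, and 2026-01-10 in the entries that follow. A sketch of that computation, assuming the 70-90% jitter window client-go's certificate manager is known to use; the NotBefore value is an assumption, since only the expiration appears in the log:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a jittered point in the certificate's lifetime,
// in the spirit of client-go's certificate manager (assumed here to land
// 70-90% of the way from NotBefore to NotAfter; not kubelet source).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC) // assumed issue time
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // expiry from the log
	// Each call lands somewhere different in the window, matching the
	// moving deadlines logged at 12:12:50, 12:12:51, and 12:12:52.
	fmt.Println(rotationDeadline(notBefore, notAfter))
}
```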
Jan 27 12:12:51 crc kubenswrapper[4745]: I0127 12:12:51.068232 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 02:20:05.182300099 +0000 UTC
Jan 27 12:12:51 crc kubenswrapper[4745]: I0127 12:12:51.073742 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl"
Jan 27 12:12:51 crc kubenswrapper[4745]: E0127 12:12:51.073937 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd"
Jan 27 12:12:51 crc kubenswrapper[4745]: I0127 12:12:51.074030 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 12:12:51 crc kubenswrapper[4745]: I0127 12:12:51.074094 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 12:12:51 crc kubenswrapper[4745]: E0127 12:12:51.074121 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 12:12:51 crc kubenswrapper[4745]: I0127 12:12:51.074227 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 12:12:51 crc kubenswrapper[4745]: E0127 12:12:51.074373 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 12:12:51 crc kubenswrapper[4745]: E0127 12:12:51.074505 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
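Each pod worker bails out early for pods that need a fresh sandbox while the runtime network is down, which is what produces the paired util.go / pod_workers.go entries above. A stripped-down Go sketch of that gate; the function and variable names are illustrative, not the kubelet's own:

```go
package main

import (
	"errors"
	"fmt"
)

// errNetworkNotReady mimics the runtime status the kubelet keeps reporting.
var errNetworkNotReady = errors.New(
	"network is not ready: container runtime network not ready: NetworkReady=false")

// syncPod stands in for a kubelet pod worker: pods that need a new
// sandbox are skipped until the CNI plugin reports ready.
func syncPod(pod string, networkReady bool) error {
	if !networkReady {
		return errNetworkNotReady
	}
	fmt.Printf("starting new sandbox for %s\n", pod)
	return nil
}

func main() {
	for _, pod := range []string{
		"openshift-multus/network-metrics-daemon-swntl",
		"openshift-network-diagnostics/network-check-target-xd92c",
	} {
		if err := syncPod(pod, false); err != nil {
			// Matches the pod_workers.go "Error syncing pod, skipping" entries.
			fmt.Printf("Error syncing pod, skipping: %v pod=%q\n", err, pod)
		}
	}
}
```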
Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.069090 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 14:53:21.57420808 +0000 UTC
Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.396310 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.396432 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.396487 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.396520 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.396540 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:52Z","lastTransitionTime":"2026-01-27T12:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:52 crc kubenswrapper[4745]: E0127 12:12:52.418930 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:52Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.424649 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.424741 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.424784 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.424843 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.424865 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:52Z","lastTransitionTime":"2026-01-27T12:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:52 crc kubenswrapper[4745]: E0127 12:12:52.447988 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:52Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.455218 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.455267 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.455275 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.455294 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.455305 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:52Z","lastTransitionTime":"2026-01-27T12:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:52 crc kubenswrapper[4745]: E0127 12:12:52.470429 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:52Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.484375 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.484639 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.484769 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.484890 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.484981 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:52Z","lastTransitionTime":"2026-01-27T12:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:52 crc kubenswrapper[4745]: E0127 12:12:52.497668 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:52Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.501103 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.501177 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.501192 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.501217 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.501231 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:52Z","lastTransitionTime":"2026-01-27T12:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:52 crc kubenswrapper[4745]: E0127 12:12:52.516029 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:52Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:52 crc kubenswrapper[4745]: E0127 12:12:52.516234 4745 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.518256 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.518289 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.518300 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.518320 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.518333 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:52Z","lastTransitionTime":"2026-01-27T12:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.620618 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.620663 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.620675 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.620699 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.620714 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:52Z","lastTransitionTime":"2026-01-27T12:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.723371 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.723414 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.723425 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.723441 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.723453 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:52Z","lastTransitionTime":"2026-01-27T12:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.825907 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.825969 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.825986 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.826009 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.826025 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:52Z","lastTransitionTime":"2026-01-27T12:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.929068 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.929161 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.929178 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.929203 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:52 crc kubenswrapper[4745]: I0127 12:12:52.929222 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:52Z","lastTransitionTime":"2026-01-27T12:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.032154 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.032208 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.032226 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.032251 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.032269 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:53Z","lastTransitionTime":"2026-01-27T12:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.069480 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 02:20:26.791146204 +0000 UTC Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.072851 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.073115 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:12:53 crc kubenswrapper[4745]: E0127 12:12:53.073111 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.073161 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:12:53 crc kubenswrapper[4745]: E0127 12:12:53.073976 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.073190 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:12:53 crc kubenswrapper[4745]: E0127 12:12:53.074079 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:12:53 crc kubenswrapper[4745]: E0127 12:12:53.073788 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.135850 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.135888 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.135897 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.135915 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.135926 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:53Z","lastTransitionTime":"2026-01-27T12:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.239401 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.239452 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.239471 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.239497 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.239513 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:53Z","lastTransitionTime":"2026-01-27T12:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.342450 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.342513 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.342535 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.342565 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.342587 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:53Z","lastTransitionTime":"2026-01-27T12:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.444858 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.444930 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.444954 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.444983 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.445004 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:53Z","lastTransitionTime":"2026-01-27T12:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.547986 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.548042 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.548060 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.548084 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.548101 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:53Z","lastTransitionTime":"2026-01-27T12:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.650386 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.650441 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.650458 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.650479 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.650498 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:53Z","lastTransitionTime":"2026-01-27T12:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.753588 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.753628 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.753636 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.753654 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.753662 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:53Z","lastTransitionTime":"2026-01-27T12:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.856449 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.856498 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.856510 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.856533 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.856548 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:53Z","lastTransitionTime":"2026-01-27T12:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.959438 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.959489 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.959504 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.959529 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:53 crc kubenswrapper[4745]: I0127 12:12:53.959571 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:53Z","lastTransitionTime":"2026-01-27T12:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.062455 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.062523 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.062542 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.062569 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.062591 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:54Z","lastTransitionTime":"2026-01-27T12:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.070739 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 19:08:35.806379089 +0000 UTC Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.165164 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.165235 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.165256 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.165287 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.165310 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:54Z","lastTransitionTime":"2026-01-27T12:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.268086 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.268166 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.268184 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.268208 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.268226 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:54Z","lastTransitionTime":"2026-01-27T12:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.370555 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.370605 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.370620 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.370640 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.370653 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:54Z","lastTransitionTime":"2026-01-27T12:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.473483 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.473530 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.473541 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.473555 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.473566 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:54Z","lastTransitionTime":"2026-01-27T12:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.576065 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.576140 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.576155 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.576173 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.576191 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:54Z","lastTransitionTime":"2026-01-27T12:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.678885 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.678919 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.678926 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.678939 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.678947 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:54Z","lastTransitionTime":"2026-01-27T12:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.781131 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.781167 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.781178 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.781192 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.781202 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:54Z","lastTransitionTime":"2026-01-27T12:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.884083 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.884118 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.884128 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.884144 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.884153 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:54Z","lastTransitionTime":"2026-01-27T12:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.986788 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.986881 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.986899 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.986922 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:54 crc kubenswrapper[4745]: I0127 12:12:54.986939 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:54Z","lastTransitionTime":"2026-01-27T12:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.071837 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 11:50:45.395108336 +0000 UTC Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.073048 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.073064 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:55 crc kubenswrapper[4745]: E0127 12:12:55.073163 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.073235 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.073288 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:12:55 crc kubenswrapper[4745]: E0127 12:12:55.073367 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:12:55 crc kubenswrapper[4745]: E0127 12:12:55.073495 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:12:55 crc kubenswrapper[4745]: E0127 12:12:55.073589 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.089188 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.089214 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.089222 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.089234 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.089243 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:55Z","lastTransitionTime":"2026-01-27T12:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.191375 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.191444 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.191461 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.191485 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.191502 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:55Z","lastTransitionTime":"2026-01-27T12:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.295579 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.295624 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.295640 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.295663 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.295682 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:55Z","lastTransitionTime":"2026-01-27T12:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.398609 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.398860 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.398929 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.398996 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.399063 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:55Z","lastTransitionTime":"2026-01-27T12:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.501798 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.502091 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.502157 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.502222 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.502291 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:55Z","lastTransitionTime":"2026-01-27T12:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.605396 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.605483 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.605501 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.605556 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.605574 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:55Z","lastTransitionTime":"2026-01-27T12:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.710429 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.710462 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.710471 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.710485 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.710495 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:55Z","lastTransitionTime":"2026-01-27T12:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.813122 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.813162 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.813171 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.813185 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.813194 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:55Z","lastTransitionTime":"2026-01-27T12:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.915327 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.915359 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.915368 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.915381 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:55 crc kubenswrapper[4745]: I0127 12:12:55.915389 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:55Z","lastTransitionTime":"2026-01-27T12:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.017663 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.017732 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.017751 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.017776 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.017794 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:56Z","lastTransitionTime":"2026-01-27T12:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.072028 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 07:18:19.810061999 +0000 UTC Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.120708 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.120773 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.120790 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.120857 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.120876 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:56Z","lastTransitionTime":"2026-01-27T12:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.223185 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.223240 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.223254 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.223273 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.223285 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:56Z","lastTransitionTime":"2026-01-27T12:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.326307 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.326348 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.326360 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.326378 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.326390 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:56Z","lastTransitionTime":"2026-01-27T12:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.428776 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.428842 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.428854 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.428871 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.428885 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:56Z","lastTransitionTime":"2026-01-27T12:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.531309 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.531348 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.531358 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.531376 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.531384 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:56Z","lastTransitionTime":"2026-01-27T12:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.633794 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.633840 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.633850 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.633863 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.633872 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:56Z","lastTransitionTime":"2026-01-27T12:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.735952 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.735991 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.736004 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.736018 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.736026 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:56Z","lastTransitionTime":"2026-01-27T12:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.838503 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.838535 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.838544 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.838559 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.838568 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:56Z","lastTransitionTime":"2026-01-27T12:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.941761 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.941798 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.941823 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.941843 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:56 crc kubenswrapper[4745]: I0127 12:12:56.941853 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:56Z","lastTransitionTime":"2026-01-27T12:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.045243 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.045638 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.045855 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.046091 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.046232 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:57Z","lastTransitionTime":"2026-01-27T12:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.072601 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 13:29:55.493573521 +0000 UTC Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.073174 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.073253 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.073253 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:12:57 crc kubenswrapper[4745]: E0127 12:12:57.073411 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.073607 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:12:57 crc kubenswrapper[4745]: E0127 12:12:57.073725 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:12:57 crc kubenswrapper[4745]: E0127 12:12:57.074147 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:12:57 crc kubenswrapper[4745]: E0127 12:12:57.074301 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.148580 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.148911 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.149021 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.149131 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.149231 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:57Z","lastTransitionTime":"2026-01-27T12:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.251496 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.251539 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.251549 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.251564 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.251576 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:57Z","lastTransitionTime":"2026-01-27T12:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.354351 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.354408 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.354421 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.354441 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.354454 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:57Z","lastTransitionTime":"2026-01-27T12:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.390099 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1811fa8-9015-4fe0-8fad-2461d64cdffd-metrics-certs\") pod \"network-metrics-daemon-swntl\" (UID: \"c1811fa8-9015-4fe0-8fad-2461d64cdffd\") " pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:12:57 crc kubenswrapper[4745]: E0127 12:12:57.390311 4745 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 12:12:57 crc kubenswrapper[4745]: E0127 12:12:57.390408 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1811fa8-9015-4fe0-8fad-2461d64cdffd-metrics-certs podName:c1811fa8-9015-4fe0-8fad-2461d64cdffd nodeName:}" failed. No retries permitted until 2026-01-27 12:13:29.390389736 +0000 UTC m=+102.195300434 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1811fa8-9015-4fe0-8fad-2461d64cdffd-metrics-certs") pod "network-metrics-daemon-swntl" (UID: "c1811fa8-9015-4fe0-8fad-2461d64cdffd") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.456931 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.456980 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.456992 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.457012 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.457025 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:57Z","lastTransitionTime":"2026-01-27T12:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.559463 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.559754 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.559874 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.559995 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.560089 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:57Z","lastTransitionTime":"2026-01-27T12:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.661549 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.661592 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.661604 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.661619 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.661629 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:57Z","lastTransitionTime":"2026-01-27T12:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.764506 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.764575 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.764595 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.764620 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.764638 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:57Z","lastTransitionTime":"2026-01-27T12:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.867018 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.867112 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.867137 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.867166 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.867191 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:57Z","lastTransitionTime":"2026-01-27T12:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.969975 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.970037 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.970053 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.970076 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:57 crc kubenswrapper[4745]: I0127 12:12:57.970093 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:57Z","lastTransitionTime":"2026-01-27T12:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.072780 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 07:58:31.428163928 +0000 UTC Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.073090 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.073211 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.073269 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.073300 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.073360 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:58Z","lastTransitionTime":"2026-01-27T12:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.073968 4745 scope.go:117] "RemoveContainer" containerID="3d6558936d77cc65d9edfd96292f88d201f206f8573847aedc0e5c43ee2cb448" Jan 27 12:12:58 crc kubenswrapper[4745]: E0127 12:12:58.074154 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bnfh4_openshift-ovn-kubernetes(26b1987b-69bb-4768-a874-5a97b3327469)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" podUID="26b1987b-69bb-4768-a874-5a97b3327469" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.089235 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-
cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a075761afe1cb61a52a5a40380f0fd9eef24c6f1edc5e9cb4c1a70a6039fdce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:58Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.105745 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:58Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.115916 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:58Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.128801 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:58Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.146696 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:58Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.159219 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:58Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.170920 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaf34407c3ea76cda570485ee81a6188c5b1c441dd736272afcb42b0fe1e506a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:58Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.177272 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.177313 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:58 crc 
kubenswrapper[4745]: I0127 12:12:58.177324 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.177343 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.177355 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:58Z","lastTransitionTime":"2026-01-27T12:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.192145 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6558936d77cc65d9edfd96292f88d201f206f8
573847aedc0e5c43ee2cb448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d6558936d77cc65d9edfd96292f88d201f206f8573847aedc0e5c43ee2cb448\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:12:41Z\\\",\\\"message\\\":\\\"nformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:41Z is after 2025-08-24T17:21:41Z]\\\\nI0127 12:12:41.101837 6422 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 12:12:41.101854 6422 model_client.go:382] Update o\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bnfh4_openshift-ovn-kubernetes(26b1987b-69bb-4768-a874-5a97b3327469)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:58Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.205078 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:58Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.217080 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:58Z is after 
2025-08-24T17:21:41Z" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.229892 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"pod
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:58Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.239902 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:58Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.250532 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed462537-34be-41e5-a6cb-f8e385dbcf99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd28c1e66e65534a92e21787b94ac2fc6ba53cfd7e70a50a54bea18ddfc3797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca350bc8a2f3ce6b6271662b78adfcbf42bdaea57a45d78132ab39d08c50562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z572k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:58Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.260522 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-swntl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1811fa8-9015-4fe0-8fad-2461d64cdffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-swntl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:58Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.272035 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49be802e-401f-41a8-aa5c-ae1d63523f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082334b7a9de2a2744faf97640bf056777c5f2da7cf5ff825121305a40dbf6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51afaee5ac37b7ae8bf400d42e8d2d6a3e12687ea9e080d7faa4e09dff5bf758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e1b927f3b132dddf8fc7a708f4eaff3596ea17106764af4369e50b0165a373\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15cb80506071a3f2c69004a1281de26c5d8e3ab484c430453248f65e3c61b25f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cb80506071a3f2c69004a1281de26c5d8e3ab484c430453248f65e3c61b25f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:58Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.280087 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.280120 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.280128 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.280144 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.280154 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:58Z","lastTransitionTime":"2026-01-27T12:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.284563 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:58Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.293776 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:58Z is after 2025-08-24T17:21:41Z" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.382983 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.383428 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.383568 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.383734 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.383924 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:58Z","lastTransitionTime":"2026-01-27T12:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.486839 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.487076 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.487172 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.487239 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.487300 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:58Z","lastTransitionTime":"2026-01-27T12:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.590233 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.590276 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.590288 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.590302 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.590313 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:58Z","lastTransitionTime":"2026-01-27T12:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.692930 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.692978 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.692990 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.693008 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.693018 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:58Z","lastTransitionTime":"2026-01-27T12:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.795759 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.795821 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.795835 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.795851 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.795863 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:58Z","lastTransitionTime":"2026-01-27T12:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.897965 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.898008 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.898016 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.898030 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:58 crc kubenswrapper[4745]: I0127 12:12:58.898039 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:58Z","lastTransitionTime":"2026-01-27T12:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.000189 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.000222 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.000234 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.000249 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.000260 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:59Z","lastTransitionTime":"2026-01-27T12:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.072803 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.072885 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.072848 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.072940 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 14:39:42.63990871 +0000 UTC Jan 27 12:12:59 crc kubenswrapper[4745]: E0127 12:12:59.073011 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.072859 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:12:59 crc kubenswrapper[4745]: E0127 12:12:59.073399 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:12:59 crc kubenswrapper[4745]: E0127 12:12:59.073564 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:12:59 crc kubenswrapper[4745]: E0127 12:12:59.073700 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.102943 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.102996 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.103008 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.103027 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.103045 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:59Z","lastTransitionTime":"2026-01-27T12:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.205867 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.205913 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.205929 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.205948 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.205963 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:59Z","lastTransitionTime":"2026-01-27T12:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.308337 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.308388 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.308402 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.308421 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.308434 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:59Z","lastTransitionTime":"2026-01-27T12:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.411299 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.411342 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.411353 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.411369 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.411381 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:59Z","lastTransitionTime":"2026-01-27T12:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.513608 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.513651 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.513664 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.513680 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.513695 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:59Z","lastTransitionTime":"2026-01-27T12:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.616171 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.616221 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.616232 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.616250 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.616260 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:59Z","lastTransitionTime":"2026-01-27T12:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.749501 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.749560 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.749573 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.749591 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.749616 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:59Z","lastTransitionTime":"2026-01-27T12:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.852149 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.852221 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.852235 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.852253 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.852264 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:59Z","lastTransitionTime":"2026-01-27T12:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.954668 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.954729 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.954741 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.954763 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:12:59 crc kubenswrapper[4745]: I0127 12:12:59.954778 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:12:59Z","lastTransitionTime":"2026-01-27T12:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.057684 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.057725 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.057734 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.057749 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.057758 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:00Z","lastTransitionTime":"2026-01-27T12:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.073238 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 05:34:26.182421161 +0000 UTC Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.160081 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.160124 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.160136 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.160154 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.160167 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:00Z","lastTransitionTime":"2026-01-27T12:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.262743 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.262782 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.262791 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.262805 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.262827 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:00Z","lastTransitionTime":"2026-01-27T12:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.366350 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.366443 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.366456 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.366472 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.366486 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:00Z","lastTransitionTime":"2026-01-27T12:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.470127 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.470164 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.470172 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.470186 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.470198 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:00Z","lastTransitionTime":"2026-01-27T12:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.572349 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.572383 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.572390 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.572407 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.572415 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:00Z","lastTransitionTime":"2026-01-27T12:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.675233 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.675269 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.675278 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.675291 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.675302 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:00Z","lastTransitionTime":"2026-01-27T12:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.778879 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.778919 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.778928 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.778946 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.778956 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:00Z","lastTransitionTime":"2026-01-27T12:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.883077 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.883135 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.883168 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.883190 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.883207 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:00Z","lastTransitionTime":"2026-01-27T12:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.985590 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.985645 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.985658 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.985679 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:00 crc kubenswrapper[4745]: I0127 12:13:00.985690 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:00Z","lastTransitionTime":"2026-01-27T12:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.072915 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:13:01 crc kubenswrapper[4745]: E0127 12:13:01.073049 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.073215 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:13:01 crc kubenswrapper[4745]: E0127 12:13:01.073261 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.073378 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:13:01 crc kubenswrapper[4745]: E0127 12:13:01.073423 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.073522 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:13:01 crc kubenswrapper[4745]: E0127 12:13:01.073567 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.073695 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 04:37:54.723641298 +0000 UTC Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.088551 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.088590 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.088599 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.088614 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.088623 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:01Z","lastTransitionTime":"2026-01-27T12:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.191264 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.191302 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.191312 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.191329 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.191341 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:01Z","lastTransitionTime":"2026-01-27T12:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.293908 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.293968 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.293987 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.294012 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.294030 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:01Z","lastTransitionTime":"2026-01-27T12:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.396858 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.396933 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.396948 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.396966 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.396980 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:01Z","lastTransitionTime":"2026-01-27T12:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.499528 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.499775 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.499892 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.500022 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.500128 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:01Z","lastTransitionTime":"2026-01-27T12:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.602679 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.602723 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.602733 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.602747 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.602757 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:01Z","lastTransitionTime":"2026-01-27T12:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.704501 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.704540 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.704551 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.704566 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.704576 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:01Z","lastTransitionTime":"2026-01-27T12:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.806713 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.806789 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.806846 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.806897 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.806921 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:01Z","lastTransitionTime":"2026-01-27T12:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.909208 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.909251 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.909264 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.909279 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:01 crc kubenswrapper[4745]: I0127 12:13:01.909291 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:01Z","lastTransitionTime":"2026-01-27T12:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.011395 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.011445 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.011456 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.011472 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.011484 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:02Z","lastTransitionTime":"2026-01-27T12:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.074671 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 20:09:28.971369096 +0000 UTC Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.114251 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.114335 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.114373 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.114411 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.114439 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:02Z","lastTransitionTime":"2026-01-27T12:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.216884 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.216977 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.217005 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.217035 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.217058 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:02Z","lastTransitionTime":"2026-01-27T12:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.320171 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.320218 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.320230 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.320248 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.320260 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:02Z","lastTransitionTime":"2026-01-27T12:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.422468 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.422508 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.422518 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.422533 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.422544 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:02Z","lastTransitionTime":"2026-01-27T12:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.524860 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.524901 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.524913 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.524929 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.524941 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:02Z","lastTransitionTime":"2026-01-27T12:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.626826 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.626864 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.626874 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.626888 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.626898 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:02Z","lastTransitionTime":"2026-01-27T12:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.666849 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-97hlh_c438e876-f4c1-42ca-b935-b5e58be9cfb2/kube-multus/0.log" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.666892 4745 generic.go:334] "Generic (PLEG): container finished" podID="c438e876-f4c1-42ca-b935-b5e58be9cfb2" containerID="910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966" exitCode=1 Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.666923 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-97hlh" event={"ID":"c438e876-f4c1-42ca-b935-b5e58be9cfb2","Type":"ContainerDied","Data":"910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966"} Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.667360 4745 scope.go:117] "RemoveContainer" containerID="910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.682877 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:02Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.699006 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a075761afe1cb61a52a5a40380f0fd9eef24c6f1edc5e9cb4c1a70a6039fdce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:02Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.712834 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:02Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.726037 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:02Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.729873 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.729941 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.729958 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.729977 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.729993 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:02Z","lastTransitionTime":"2026-01-27T12:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.737466 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:02Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.750237 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaf34407c3ea76cda570485ee81a6188c5b1c441dd736272afcb42b0fe1e506a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:02Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.773555 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6558936d77cc65d9edfd96292f88d201f206f8573847aedc0e5c43ee2cb448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d6558936d77cc65d9edfd96292f88d201f206f8573847aedc0e5c43ee2cb448\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:12:41Z\\\",\\\"message\\\":\\\"nformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:41Z is after 2025-08-24T17:21:41Z]\\\\nI0127 12:12:41.101837 6422 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 12:12:41.101854 6422 model_client.go:382] Update o\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bnfh4_openshift-ovn-kubernetes(26b1987b-69bb-4768-a874-5a97b3327469)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:02Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.788683 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:02Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.804747 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"message\\\":\\\"2026-01-27T12:12:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b76ff76f-f2e7-4ef4-a337-33f2955f9664\\\\n2026-01-27T12:12:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b76ff76f-f2e7-4ef4-a337-33f2955f9664 to /host/opt/cni/bin/\\\\n2026-01-27T12:12:16Z [verbose] multus-daemon 
started\\\\n2026-01-27T12:12:16Z [verbose] Readiness Indicator file check\\\\n2026-01-27T12:13:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:02Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.817685 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:02Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.826471 4745 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.826528 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.826554 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.826583 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.826607 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:02Z","lastTransitionTime":"2026-01-27T12:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.827697 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed462537-34be-41e5-a6cb-f8e385dbcf99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd28c1e66e65534a92e21787b94ac2fc6ba53cfd7e70a50a54bea18ddfc3797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca350bc8a2f3ce6b6271662b78adfcbf42bdaea57a45d78132ab39d08c50562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z572k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:02Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:02 crc kubenswrapper[4745]: E0127 12:13:02.840557 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:02Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.842687 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-swntl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1811fa8-9015-4fe0-8fad-2461d64cdffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-swntl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:02Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.845169 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.845201 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.845209 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.845223 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.845233 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:02Z","lastTransitionTime":"2026-01-27T12:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.856224 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:02Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:02 crc kubenswrapper[4745]: E0127 12:13:02.860668 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:02Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.864232 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.864471 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.864590 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.864722 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.864841 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:02Z","lastTransitionTime":"2026-01-27T12:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.870013 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:02Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:02 crc kubenswrapper[4745]: E0127 12:13:02.882070 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:02Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.884214 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:02Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.886897 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.887058 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.887170 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.887253 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 
12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.887333 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:02Z","lastTransitionTime":"2026-01-27T12:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.897176 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:02Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:02 crc kubenswrapper[4745]: E0127 12:13:02.900458 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:02Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.903704 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.903741 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.903751 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.903764 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.903773 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:02Z","lastTransitionTime":"2026-01-27T12:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.909182 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49be802e-401f-41a8-aa5c-ae1d63523f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082334b7a9de2a2744faf97640bf056777c5f2da7cf5ff825121305a40dbf6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51afaee5ac37b7ae8bf400d42e8d2d6a3e12687ea9e080d7faa4e09dff5bf758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e1b927f3b132dddf8fc7a708f4eaff3596ea17106764af4369e50b0165a373\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15cb80506071a3f2c69004a1281de26c5d8e3ab484c430453248f65e3c61b25f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cb80506071a3f2c69004a1281de26c5d8e3ab484c430453248f65e3c61b25f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:02Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:02 crc kubenswrapper[4745]: E0127 12:13:02.915886 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:02Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:02 crc kubenswrapper[4745]: E0127 12:13:02.916365 4745 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.917796 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.917925 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.918007 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.918079 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:02 crc kubenswrapper[4745]: I0127 12:13:02.918144 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:02Z","lastTransitionTime":"2026-01-27T12:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.019884 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.019927 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.019936 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.019951 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.019960 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:03Z","lastTransitionTime":"2026-01-27T12:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.073631 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.073640 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.073659 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.073667 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:13:03 crc kubenswrapper[4745]: E0127 12:13:03.073894 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:13:03 crc kubenswrapper[4745]: E0127 12:13:03.073961 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:13:03 crc kubenswrapper[4745]: E0127 12:13:03.074065 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:13:03 crc kubenswrapper[4745]: E0127 12:13:03.074154 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.074753 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 16:01:56.095010931 +0000 UTC Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.122890 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.122933 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.122947 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.122964 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.122975 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:03Z","lastTransitionTime":"2026-01-27T12:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.225035 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.225217 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.225280 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.225349 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.225408 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:03Z","lastTransitionTime":"2026-01-27T12:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.327515 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.327550 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.327560 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.327574 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.327586 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:03Z","lastTransitionTime":"2026-01-27T12:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.429396 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.429451 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.429461 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.429475 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.429485 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:03Z","lastTransitionTime":"2026-01-27T12:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.532045 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.532083 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.532096 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.532112 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.532123 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:03Z","lastTransitionTime":"2026-01-27T12:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.634659 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.635009 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.635104 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.635191 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.635293 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:03Z","lastTransitionTime":"2026-01-27T12:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.670876 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-97hlh_c438e876-f4c1-42ca-b935-b5e58be9cfb2/kube-multus/0.log" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.671114 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-97hlh" event={"ID":"c438e876-f4c1-42ca-b935-b5e58be9cfb2","Type":"ContainerStarted","Data":"baa0c592826e1a106ac51a6142fa5d252f837bafe03f4cdc94832dbb4704e9f3"} Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.690887 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\
\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a075761afe1cb61a52a5a40380f0fd9eef24c6f1edc5e9cb4c1a70a6039fdce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:03Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.704015 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:03Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.715032 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:03Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.732036 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaf34407c3ea76cda570485ee81a6188c5b1c441dd736272afcb42b0fe1e506a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:03Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.737164 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.737198 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:03 crc 
kubenswrapper[4745]: I0127 12:13:03.737206 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.737221 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.737231 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:03Z","lastTransitionTime":"2026-01-27T12:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.749902 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6558936d77cc65d9edfd96292f88d201f206f8
573847aedc0e5c43ee2cb448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d6558936d77cc65d9edfd96292f88d201f206f8573847aedc0e5c43ee2cb448\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:12:41Z\\\",\\\"message\\\":\\\"nformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:41Z is after 2025-08-24T17:21:41Z]\\\\nI0127 12:12:41.101837 6422 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 12:12:41.101854 6422 model_client.go:382] Update o\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bnfh4_openshift-ovn-kubernetes(26b1987b-69bb-4768-a874-5a97b3327469)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:03Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.764458 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:03Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.776153 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:03Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.789562 4745 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:03Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.802562 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed462537-34be-41e5-a6cb-f8e385dbcf99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd28c1e66e65534a92e21787b94ac2fc6ba53cfd7e70a50a54bea18ddfc3797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca350bc8a2f3ce6b6271662b78adfcbf42bdaea57a45d78132ab39d08c50562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z572k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:03Z is after 2025-08-24T17:21:41Z" Jan 27 
12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.813923 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-swntl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1811fa8-9015-4fe0-8fad-2461d64cdffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-swntl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:03Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.827948 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:03Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.839347 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.839504 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.839823 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.839968 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.840081 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:03Z","lastTransitionTime":"2026-01-27T12:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.841599 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:03Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.856880 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa0c592826e1a106ac51a6142fa5d252f837bafe03f4cdc94832dbb4704e9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"message\\\":\\\"2026-01-27T12:12:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b76ff76f-f2e7-4ef4-a337-33f2955f9664\\\\n2026-01-27T12:12:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b76ff76f-f2e7-4ef4-a337-33f2955f9664 to /host/opt/cni/bin/\\\\n2026-01-27T12:12:16Z [verbose] multus-daemon started\\\\n2026-01-27T12:12:16Z [verbose] Readiness Indicator file check\\\\n2026-01-27T12:13:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:13:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:03Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.867718 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:03Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.879796 4745 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49be802e-401f-41a8-aa5c-ae1d63523f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082334b7a9de2a2744faf97640bf056777c5f2da7cf5ff825121305a40dbf6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51afaee5ac37b7ae8bf400d42e8d2d6a3e12687ea9e080d7faa4e09dff5bf758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e1b927f3b132dddf8fc7a708f4eaff3596ea17106764af4369e50b0165a373\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://15cb80506071a3f2c69004a1281de26c5d8e3ab484c430453248f65e3c61b25f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cb80506071a3f2c69004a1281de26c5d8e3ab484c430453248f65e3c61b25f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:03Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.890588 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:03Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.898674 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:03Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.942613 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.942660 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.942669 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.942686 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:03 crc kubenswrapper[4745]: I0127 12:13:03.942696 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:03Z","lastTransitionTime":"2026-01-27T12:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.045102 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.045174 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.045190 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.045209 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.045222 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:04Z","lastTransitionTime":"2026-01-27T12:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.075798 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 00:27:33.046909122 +0000 UTC Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.147852 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.147906 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.147918 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.147936 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.147949 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:04Z","lastTransitionTime":"2026-01-27T12:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.250940 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.251019 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.251048 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.251081 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.251104 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:04Z","lastTransitionTime":"2026-01-27T12:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.354281 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.354316 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.354330 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.354346 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.354358 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:04Z","lastTransitionTime":"2026-01-27T12:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.456589 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.456642 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.456656 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.456676 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.456695 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:04Z","lastTransitionTime":"2026-01-27T12:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.559544 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.559576 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.559588 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.559603 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.559613 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:04Z","lastTransitionTime":"2026-01-27T12:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.662231 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.662278 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.662291 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.662311 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.662323 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:04Z","lastTransitionTime":"2026-01-27T12:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.764748 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.764839 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.764858 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.764893 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.764917 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:04Z","lastTransitionTime":"2026-01-27T12:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.867784 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.867864 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.867883 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.867903 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.867916 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:04Z","lastTransitionTime":"2026-01-27T12:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.971467 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.971506 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.971515 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.971546 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:04 crc kubenswrapper[4745]: I0127 12:13:04.971556 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:04Z","lastTransitionTime":"2026-01-27T12:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.073065 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.073112 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:13:05 crc kubenswrapper[4745]: E0127 12:13:05.073194 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:13:05 crc kubenswrapper[4745]: E0127 12:13:05.073313 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.073430 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.073466 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:13:05 crc kubenswrapper[4745]: E0127 12:13:05.073473 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:13:05 crc kubenswrapper[4745]: E0127 12:13:05.073606 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.074675 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.074704 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.074714 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.074727 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.074736 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:05Z","lastTransitionTime":"2026-01-27T12:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.076000 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 03:56:03.394149916 +0000 UTC Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.178204 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.178295 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.178318 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.178350 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.178373 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:05Z","lastTransitionTime":"2026-01-27T12:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.281879 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.282000 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.282022 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.282046 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.282064 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:05Z","lastTransitionTime":"2026-01-27T12:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.384900 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.384936 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.384944 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.384961 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.384976 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:05Z","lastTransitionTime":"2026-01-27T12:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.487366 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.487401 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.487413 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.487430 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.487442 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:05Z","lastTransitionTime":"2026-01-27T12:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.590574 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.590643 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.590663 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.590689 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.590709 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:05Z","lastTransitionTime":"2026-01-27T12:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.692871 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.692956 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.692978 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.693004 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.693021 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:05Z","lastTransitionTime":"2026-01-27T12:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.795966 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.796031 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.796048 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.796073 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.796090 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:05Z","lastTransitionTime":"2026-01-27T12:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.898721 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.898776 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.898793 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.898838 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:05 crc kubenswrapper[4745]: I0127 12:13:05.898857 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:05Z","lastTransitionTime":"2026-01-27T12:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.001384 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.001434 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.001448 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.001470 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.001506 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:06Z","lastTransitionTime":"2026-01-27T12:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.076303 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 23:09:03.024208623 +0000 UTC Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.104264 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.104451 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.104541 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.104634 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.104717 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:06Z","lastTransitionTime":"2026-01-27T12:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.206947 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.207015 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.207032 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.207057 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.207076 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:06Z","lastTransitionTime":"2026-01-27T12:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.309705 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.309979 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.310058 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.310133 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.310201 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:06Z","lastTransitionTime":"2026-01-27T12:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.413154 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.413216 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.413233 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.413258 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.413275 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:06Z","lastTransitionTime":"2026-01-27T12:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.517037 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.517104 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.517120 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.517141 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.517159 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:06Z","lastTransitionTime":"2026-01-27T12:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.620787 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.620869 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.620886 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.620909 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.620924 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:06Z","lastTransitionTime":"2026-01-27T12:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.722985 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.723041 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.723051 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.723066 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.723078 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:06Z","lastTransitionTime":"2026-01-27T12:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.826063 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.826134 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.826153 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.826174 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.826189 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:06Z","lastTransitionTime":"2026-01-27T12:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.930299 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.930359 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.930377 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.930400 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:06 crc kubenswrapper[4745]: I0127 12:13:06.930417 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:06Z","lastTransitionTime":"2026-01-27T12:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.034358 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.034428 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.034446 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.034469 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.034486 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:07Z","lastTransitionTime":"2026-01-27T12:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.073439 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:13:07 crc kubenswrapper[4745]: E0127 12:13:07.073643 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.073971 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:13:07 crc kubenswrapper[4745]: E0127 12:13:07.074085 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.074316 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:13:07 crc kubenswrapper[4745]: E0127 12:13:07.074418 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.074624 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:13:07 crc kubenswrapper[4745]: E0127 12:13:07.074722 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.080047 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 21:26:11.661082239 +0000 UTC Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.137695 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.137775 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.137802 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.137879 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.137899 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:07Z","lastTransitionTime":"2026-01-27T12:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.240861 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.240898 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.240914 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.240930 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.240941 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:07Z","lastTransitionTime":"2026-01-27T12:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.342707 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.342750 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.342758 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.342771 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.342779 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:07Z","lastTransitionTime":"2026-01-27T12:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.445541 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.445592 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.445608 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.445627 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.445641 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:07Z","lastTransitionTime":"2026-01-27T12:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.548115 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.548182 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.548194 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.548209 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.548222 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:07Z","lastTransitionTime":"2026-01-27T12:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.650260 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.650299 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.650310 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.650325 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.650336 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:07Z","lastTransitionTime":"2026-01-27T12:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.751955 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.752003 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.752029 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.752049 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.752068 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:07Z","lastTransitionTime":"2026-01-27T12:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.854464 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.854504 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.854514 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.854528 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.854537 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:07Z","lastTransitionTime":"2026-01-27T12:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.957252 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.957319 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.957331 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.957352 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:07 crc kubenswrapper[4745]: I0127 12:13:07.957364 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:07Z","lastTransitionTime":"2026-01-27T12:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.060592 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.060678 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.060696 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.060722 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.060739 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:08Z","lastTransitionTime":"2026-01-27T12:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.080603 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 08:12:42.921988641 +0000 UTC Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.092137 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:08Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.110672 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:08Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.127618 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:08Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.147222 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaf34407c3ea76cda570485ee81a6188c5b1c441dd736272afcb42b0fe1e506a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:08Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.164050 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.164092 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.164104 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.164120 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.164132 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:08Z","lastTransitionTime":"2026-01-27T12:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.168871 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6558936d77cc65d9edfd96292f88d201f206f8573847aedc0e5c43ee2cb448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d6558936d77cc65d9edfd96292f88d201f206f8573847aedc0e5c43ee2cb448\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:12:41Z\\\",\\\"message\\\":\\\"nformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:41Z is after 2025-08-24T17:21:41Z]\\\\nI0127 12:12:41.101837 6422 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 12:12:41.101854 6422 model_client.go:382] Update o\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed 
container=ovnkube-controller pod=ovnkube-node-bnfh4_openshift-ovn-kubernetes(26b1987b-69bb-4768-a874-5a97b3327469)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:08Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.183284 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:08Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.196193 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa0c592826e1a106ac51a6142fa5d252f837bafe03f4cdc94832dbb4704e9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"message\\\":\\\"2026-01-27T12:12:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b76ff76f-f2e7-4ef4-a337-33f2955f9664\\\\n2026-01-27T12:12:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b76ff76f-f2e7-4ef4-a337-33f2955f9664 to /host/opt/cni/bin/\\\\n2026-01-27T12:12:16Z [verbose] multus-daemon started\\\\n2026-01-27T12:12:16Z [verbose] Readiness Indicator file check\\\\n2026-01-27T12:13:01Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:13:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:08Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.210130 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:08Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.221072 4745 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed462537-34be-41e5-a6cb-f8e385dbcf99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd28c1e66e65534a92e21787b94ac2fc6ba53cfd7e70a50a54bea18ddfc3797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca350bc8a2f3ce6b6271662b78adfcbf42bdaea57a45d78132ab39d08c50562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z572k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:08Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.233062 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-swntl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1811fa8-9015-4fe0-8fad-2461d64cdffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-swntl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:08Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.245964 4745 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:08Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.260040 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:08Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.267283 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.267336 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.267349 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.267367 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.267379 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:08Z","lastTransitionTime":"2026-01-27T12:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.270574 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:08Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.283053 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49be802e-401f-41a8-aa5c-ae1d63523f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082334b7a9de2a2744faf97640bf056777c5f2da7cf5ff825121305a40dbf6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51afaee5ac37b7ae8bf400d42e8d2d6a3e12687ea9e080d7faa4e09dff5bf758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e1b927f3b132dddf8fc7a708f4eaff3596ea17106764af4369e50b0165a373\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15cb80506071a3f2c69004a1281de26c5d8e3ab484c430453248f65e3c61b25f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cb80506071a3f2c69004a1281de26c5d8e3ab484c430453248f65e3c61b25f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:08Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.295421 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:08Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.304501 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:08Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.316479 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernete
s/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a075761afe1cb61a52a5a40380f0fd9eef24c6f1edc5e9cb4c1a70a6039fdce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:08Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.369416 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.369459 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.369467 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.369482 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.369491 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:08Z","lastTransitionTime":"2026-01-27T12:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.472221 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.472284 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.472304 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.472330 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.472351 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:08Z","lastTransitionTime":"2026-01-27T12:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.575434 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.575505 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.575523 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.575547 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.575565 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:08Z","lastTransitionTime":"2026-01-27T12:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.681497 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.681590 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.681610 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.681641 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.681659 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:08Z","lastTransitionTime":"2026-01-27T12:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.783730 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.783865 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.783880 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.783897 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.783937 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:08Z","lastTransitionTime":"2026-01-27T12:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.890006 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.890060 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.890076 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.890098 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.890115 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:08Z","lastTransitionTime":"2026-01-27T12:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.993504 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.993582 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.993600 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.993625 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:08 crc kubenswrapper[4745]: I0127 12:13:08.993643 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:08Z","lastTransitionTime":"2026-01-27T12:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.073331 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:13:09 crc kubenswrapper[4745]: E0127 12:13:09.073474 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.073521 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.073530 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.073602 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:13:09 crc kubenswrapper[4745]: E0127 12:13:09.073792 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:13:09 crc kubenswrapper[4745]: E0127 12:13:09.073900 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:13:09 crc kubenswrapper[4745]: E0127 12:13:09.073967 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.081117 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 19:41:17.348219393 +0000 UTC Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.096299 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.096358 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.096376 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.096399 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.096416 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:09Z","lastTransitionTime":"2026-01-27T12:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.199669 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.199703 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.199712 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.199727 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.199742 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:09Z","lastTransitionTime":"2026-01-27T12:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.302886 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.302922 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.302932 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.302947 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.302956 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:09Z","lastTransitionTime":"2026-01-27T12:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.405339 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.405370 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.405378 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.405390 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.405400 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:09Z","lastTransitionTime":"2026-01-27T12:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.507881 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.507959 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.507970 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.507984 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.507993 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:09Z","lastTransitionTime":"2026-01-27T12:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.611076 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.611122 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.611131 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.611146 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.611155 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:09Z","lastTransitionTime":"2026-01-27T12:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.713700 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.713763 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.713780 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.713846 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.713881 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:09Z","lastTransitionTime":"2026-01-27T12:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.816908 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.816971 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.816988 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.817015 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.817032 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:09Z","lastTransitionTime":"2026-01-27T12:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.919727 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.919794 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.919848 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.919880 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:09 crc kubenswrapper[4745]: I0127 12:13:09.919902 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:09Z","lastTransitionTime":"2026-01-27T12:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.022486 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.022549 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.022567 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.022591 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.022608 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:10Z","lastTransitionTime":"2026-01-27T12:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.075007 4745 scope.go:117] "RemoveContainer" containerID="3d6558936d77cc65d9edfd96292f88d201f206f8573847aedc0e5c43ee2cb448" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.081387 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 22:36:39.422541218 +0000 UTC Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.124899 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.124933 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.124943 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.124959 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.124972 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:10Z","lastTransitionTime":"2026-01-27T12:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.227010 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.227044 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.227055 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.227070 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.227080 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:10Z","lastTransitionTime":"2026-01-27T12:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.329693 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.329720 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.329728 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.329740 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.329750 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:10Z","lastTransitionTime":"2026-01-27T12:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.431392 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.431417 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.431425 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.431437 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.431447 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:10Z","lastTransitionTime":"2026-01-27T12:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.533587 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.533625 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.533633 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.533647 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.533656 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:10Z","lastTransitionTime":"2026-01-27T12:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.635478 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.635522 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.635534 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.635549 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.635562 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:10Z","lastTransitionTime":"2026-01-27T12:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.699935 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bnfh4_26b1987b-69bb-4768-a874-5a97b3327469/ovnkube-controller/2.log" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.702711 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" event={"ID":"26b1987b-69bb-4768-a874-5a97b3327469","Type":"ContainerStarted","Data":"17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627"} Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.703157 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.716489 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a075761afe1cb61a52a5a40380f0fd9eef24c6f1edc5e9cb4c1a70a6039fdce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:10Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.727326 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:10Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.737352 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.737378 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.737386 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.737400 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.737411 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:10Z","lastTransitionTime":"2026-01-27T12:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.740320 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:10Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.754112 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:10Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.764861 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:10Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.778127 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:10Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.795021 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaf34407c3ea76cda570485ee81a6188c5b1c441dd736272afcb42b0fe1e506a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:10Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.814418 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17b32feea8002b6b6f4dd3afcac9767d85acf62d
70e4eddf196aa51b19328627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d6558936d77cc65d9edfd96292f88d201f206f8573847aedc0e5c43ee2cb448\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:12:41Z\\\",\\\"message\\\":\\\"nformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:41Z is after 2025-08-24T17:21:41Z]\\\\nI0127 12:12:41.101837 6422 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 12:12:41.101854 6422 model_client.go:382] Update 
o\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:13:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:10Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.831157 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:10Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.839511 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.839560 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.839570 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.839587 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.839598 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:10Z","lastTransitionTime":"2026-01-27T12:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.847093 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:10Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.860360 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa0c592826e1a106ac51a6142fa5d252f837bafe03f4cdc94832dbb4704e9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"message\\\":\\\"2026-01-27T12:12:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b76ff76f-f2e7-4ef4-a337-33f2955f9664\\\\n2026-01-27T12:12:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b76ff76f-f2e7-4ef4-a337-33f2955f9664 to /host/opt/cni/bin/\\\\n2026-01-27T12:12:16Z [verbose] multus-daemon started\\\\n2026-01-27T12:12:16Z [verbose] Readiness Indicator file check\\\\n2026-01-27T12:13:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:13:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:10Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.871566 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:10Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.882317 4745 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed462537-34be-41e5-a6cb-f8e385dbcf99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd28c1e66e65534a92e21787b94ac2fc6ba53cfd7e70a50a54bea18ddfc3797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca350bc8a2f3ce6b6271662b78adfcbf42bdaea57a45d78132ab39d08c50562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z572k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:10Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.893493 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-swntl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1811fa8-9015-4fe0-8fad-2461d64cdffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-swntl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:10Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.904553 4745 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49be802e-401f-41a8-aa5c-ae1d63523f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082334b7a9de2a2744faf97640bf056777c5f2da7cf5ff825121305a40dbf6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51afaee5ac37b7ae8bf400d42e8d2d6a3e12687ea9e080d7faa4e09dff5bf758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e1b927f3b132dddf8fc7a708f4eaff3596ea17106764af4369e50b0165a373\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://15cb80506071a3f2c69004a1281de26c5d8e3ab484c430453248f65e3c61b25f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cb80506071a3f2c69004a1281de26c5d8e3ab484c430453248f65e3c61b25f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:10Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.917172 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:10Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.926792 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:10Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.941471 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.941523 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.941535 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.941549 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:10 crc kubenswrapper[4745]: I0127 12:13:10.941561 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:10Z","lastTransitionTime":"2026-01-27T12:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.044610 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.044658 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.044673 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.044693 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.044708 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:11Z","lastTransitionTime":"2026-01-27T12:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.073392 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.073474 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.073415 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:13:11 crc kubenswrapper[4745]: E0127 12:13:11.073534 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.073561 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:13:11 crc kubenswrapper[4745]: E0127 12:13:11.073610 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:13:11 crc kubenswrapper[4745]: E0127 12:13:11.073691 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:13:11 crc kubenswrapper[4745]: E0127 12:13:11.073908 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.081455 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 15:02:17.003075302 +0000 UTC Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.147254 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.147344 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.147363 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.147382 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.147394 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:11Z","lastTransitionTime":"2026-01-27T12:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.249385 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.249457 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.249484 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.249500 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.249509 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:11Z","lastTransitionTime":"2026-01-27T12:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.351302 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.351340 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.351351 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.351364 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.351373 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:11Z","lastTransitionTime":"2026-01-27T12:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.454358 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.454399 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.454408 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.454424 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.454434 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:11Z","lastTransitionTime":"2026-01-27T12:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.562801 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.562869 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.562881 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.562902 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.562913 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:11Z","lastTransitionTime":"2026-01-27T12:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.665056 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.665098 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.665109 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.665124 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.665135 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:11Z","lastTransitionTime":"2026-01-27T12:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.708996 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bnfh4_26b1987b-69bb-4768-a874-5a97b3327469/ovnkube-controller/3.log" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.709663 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bnfh4_26b1987b-69bb-4768-a874-5a97b3327469/ovnkube-controller/2.log" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.712072 4745 generic.go:334] "Generic (PLEG): container finished" podID="26b1987b-69bb-4768-a874-5a97b3327469" containerID="17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627" exitCode=1 Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.712117 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" event={"ID":"26b1987b-69bb-4768-a874-5a97b3327469","Type":"ContainerDied","Data":"17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627"} Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.712166 4745 scope.go:117] "RemoveContainer" containerID="3d6558936d77cc65d9edfd96292f88d201f206f8573847aedc0e5c43ee2cb448" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.712756 4745 scope.go:117] "RemoveContainer" containerID="17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627" Jan 27 12:13:11 crc kubenswrapper[4745]: E0127 12:13:11.713059 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bnfh4_openshift-ovn-kubernetes(26b1987b-69bb-4768-a874-5a97b3327469)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" podUID="26b1987b-69bb-4768-a874-5a97b3327469" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.730857 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49be802e-401f-41a8-aa5c-ae1d63523f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082334b7a9de2a2744faf97640bf056777c5f2da7cf5ff825121305a40dbf6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51afaee5ac37b7ae8bf400d42e8d2d6a3e12687ea9e080d7faa4e09dff5bf758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e1b927f3b132dddf8fc7a708f4eaff3596ea17106764af4369e50b0165a373\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15cb80506071a3f2c69004a1281de26c5d8e3ab484c430453248f65e3c61b25f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cb80506071a3f2c69004a1281de26c5d8e3ab484c430453248f65e3c61b25f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.743276 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.753225 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.764461 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a075761afe1cb61a52a5a40380f0fd9eef24c6f1edc5e9cb4c1a70a6039fdce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.766964 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.767017 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.767033 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.767054 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.767070 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:11Z","lastTransitionTime":"2026-01-27T12:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.774249 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.782028 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.791703 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.801443 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.812076 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.824834 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaf34407c3ea76cda570485ee81a6188c5b1c441dd736272afcb42b0fe1e506a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.855965 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17b32feea8002b6b6f4dd3afcac9767d85acf62d
70e4eddf196aa51b19328627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d6558936d77cc65d9edfd96292f88d201f206f8573847aedc0e5c43ee2cb448\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:12:41Z\\\",\\\"message\\\":\\\"nformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:12:41Z is after 2025-08-24T17:21:41Z]\\\\nI0127 12:12:41.101837 6422 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 12:12:41.101854 6422 model_client.go:382] Update o\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:13:10Z\\\",\\\"message\\\":\\\"e.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:10Z is after 2025-08-24T17:21:41Z]\\\\nI0127 12:13:10.788572 6868 services_controller.go:444] Built service openshift-machine-config-operator/machine-config-operator LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 12:13:10.788587 6868 services_controller.go:445] Built service 
openshift-machine-config-operator/machine-config-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI0127 12:13:10.788600 6868 services_controller.go:451] Built service openshift-machine-config-operator/machine-config-operator cluster-wide LB for network=d\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:13:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\
\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.868901 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.868940 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.868954 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.868970 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.868982 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:11Z","lastTransitionTime":"2026-01-27T12:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.881497 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.895106 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.906913 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa0c592826e1a106ac51a6142fa5d252f837bafe03f4cdc94832dbb4704e9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"message\\\":\\\"2026-01-27T12:12:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b76ff76f-f2e7-4ef4-a337-33f2955f9664\\\\n2026-01-27T12:12:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b76ff76f-f2e7-4ef4-a337-33f2955f9664 to /host/opt/cni/bin/\\\\n2026-01-27T12:12:16Z [verbose] multus-daemon started\\\\n2026-01-27T12:12:16Z [verbose] Readiness Indicator file check\\\\n2026-01-27T12:13:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:13:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.919197 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.930431 4745 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed462537-34be-41e5-a6cb-f8e385dbcf99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd28c1e66e65534a92e21787b94ac2fc6ba53cfd7e70a50a54bea18ddfc3797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca350bc8a2f3ce6b6271662b78adfcbf42bdaea57a45d78132ab39d08c50562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z572k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.939890 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-swntl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1811fa8-9015-4fe0-8fad-2461d64cdffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-swntl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.971510 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.971549 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.971557 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.971572 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:11 crc kubenswrapper[4745]: I0127 12:13:11.971580 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:11Z","lastTransitionTime":"2026-01-27T12:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.074018 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.074051 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.074059 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.074069 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.074077 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:12Z","lastTransitionTime":"2026-01-27T12:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.082187 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 21:20:35.669084258 +0000 UTC Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.175941 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.175982 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.175991 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.176005 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.176013 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:12Z","lastTransitionTime":"2026-01-27T12:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.278618 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.278655 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.278663 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.278695 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.278709 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:12Z","lastTransitionTime":"2026-01-27T12:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.380872 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.380910 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.380918 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.380933 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.380942 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:12Z","lastTransitionTime":"2026-01-27T12:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.483158 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.483212 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.483225 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.483242 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.483254 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:12Z","lastTransitionTime":"2026-01-27T12:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.584888 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.584929 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.584941 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.584959 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.584970 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:12Z","lastTransitionTime":"2026-01-27T12:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.687134 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.687180 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.687192 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.687213 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.687224 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:12Z","lastTransitionTime":"2026-01-27T12:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.716039 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bnfh4_26b1987b-69bb-4768-a874-5a97b3327469/ovnkube-controller/3.log" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.719108 4745 scope.go:117] "RemoveContainer" containerID="17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627" Jan 27 12:13:12 crc kubenswrapper[4745]: E0127 12:13:12.719296 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bnfh4_openshift-ovn-kubernetes(26b1987b-69bb-4768-a874-5a97b3327469)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" podUID="26b1987b-69bb-4768-a874-5a97b3327469" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.732943 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2395cd-8fe1-4433-8232-f4ea4d00cb1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e747172df1f1f7cf74849b9a05561833b782f8cbea521d3c4c5148175e6adf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb746d2b0aeb4e366f82e2c08e21be3bc83f376685e05889b577a5b7ec07a353\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06fff75032a2a2a727779d88f20497c4c6facceeb3a3280dd0c1b656799504f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.742190 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4x9px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98fd8161-ba85-49ff-bbae-48dd3925f0e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0b87e1b3dc77ad30f78e544d0fc7f359f14e9c6422969c93c0f9a7415c9a7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrz6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4x9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.751716 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49be802e-401f-41a8-aa5c-ae1d63523f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082334b7a9de2a2744faf97640bf056777c5f2da7cf5ff825121305a40dbf6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51afaee5ac37b7ae8bf400d42e8d2d6a3e12687ea9e080d7faa4e09dff5bf758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e1b927f3b132dddf8fc7a708f4eaff3596ea17106764af4369e50b0165a373\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15cb80506071a3f2c69004a1281de26c5d8e3ab484c430453248f65e3c61b25f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cb80506071a3f2c69004a1281de26c5d8e3ab484c430453248f65e3c61b25f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.761270 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5d8gm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"685faeae-b2b7-47a3-8da8-7fe8b2a725a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be033ef85475976746ef7233922cd8f5ff85a947b1fe08daf80004b0ea0dc303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5d8gm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.775037 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4c7cda7-14d9-4e22-82b9-f36bda68c36a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a075761afe1cb61a52a5a40380f0fd9eef24c6f1edc5e9cb4c1a70a6039fdce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 12:12:10.530335 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 12:12:10.530796 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 12:12:10.532304 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677046171/tls.crt::/tmp/serving-cert-2677046171/tls.key\\\\\\\"\\\\nI0127 12:12:11.168792 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 12:12:11.171746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 12:12:11.171768 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 12:12:11.171787 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 12:12:11.171794 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 12:12:11.176852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 12:12:11.176874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 12:12:11.176883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 12:12:11.176886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 12:12:11.176888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 12:12:11.176891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 12:12:11.177099 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 12:12:11.178543 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:11:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:11:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:11:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.787670 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.788825 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.788865 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.788890 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.788903 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.788911 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:12Z","lastTransitionTime":"2026-01-27T12:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.802577 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f42f00cae9d4ab4f167577f37621c787d1cff9d1b596c1c9a8a85ec37b5853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.816136 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.829792 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c1c5d3-2d9a-4bfe-8afe-69504e9fa0ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eaf34407c3ea76cda570485ee81a6188c5b1c441dd736272afcb42b0fe1e506a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d7f9c411e2e7282dfdef567cba43b3abd32488413e66f6459ac316434e3af8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c0cc2dee09b562bf2345fbf0f397add3891df2299372c977b35e6f66ad127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://685699a5340bef34497164a380c9ba5f482a76e830bfe8ea1dfcf1ed37abcf23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:15Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ff756c50afbf0af7b3a03751d0b600dd922080b2cedb6255b404198e3d9bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89761e977a5c0199a840137e8390932e1d64e586e1259f178f0535aae4040c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f924c8569901055e92d2fe72ffc56287f6bc96488154671dbe17a6e2d0edc46e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzt5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gc8mv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.847529 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b1987b-69bb-4768-a874-5a97b3327469\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17b32feea8002b6b6f4dd3afcac9767d85acf62d
70e4eddf196aa51b19328627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:13:10Z\\\",\\\"message\\\":\\\"e.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:10Z is after 2025-08-24T17:21:41Z]\\\\nI0127 12:13:10.788572 6868 services_controller.go:444] Built service openshift-machine-config-operator/machine-config-operator LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 12:13:10.788587 6868 services_controller.go:445] Built service openshift-machine-config-operator/machine-config-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI0127 12:13:10.788600 6868 services_controller.go:451] Built service openshift-machine-config-operator/machine-config-operator cluster-wide LB for network=d\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:13:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bnfh4_openshift-ovn-kubernetes(26b1987b-69bb-4768-a874-5a97b3327469)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8whl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnfh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.860107 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.870818 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-97hlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c438e876-f4c1-42ca-b935-b5e58be9cfb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa0c592826e1a106ac51a6142fa5d252f837bafe03f4cdc94832dbb4704e9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T12:13:02Z\\\",\\\"message\\\":\\\"2026-01-27T12:12:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b76ff76f-f2e7-4ef4-a337-33f2955f9664\\\\n2026-01-27T12:12:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b76ff76f-f2e7-4ef4-a337-33f2955f9664 to /host/opt/cni/bin/\\\\n2026-01-27T12:12:16Z [verbose] multus-daemon started\\\\n2026-01-27T12:12:16Z [verbose] Readiness Indicator file check\\\\n2026-01-27T12:13:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:13:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf5tf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-97hlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.883893 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49a22b36-6ae4-4887-b364-7d1ac21ff625\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e44678828c8cef58483ffd5fdd676499eca9bb5e7d42740103eaf235ad8fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltjnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfzkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.890876 4745 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.890914 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.890922 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.890937 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.890945 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:12Z","lastTransitionTime":"2026-01-27T12:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.904179 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed462537-34be-41e5-a6cb-f8e385dbcf99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd28c1e66e65534a92e21787b94ac2fc6ba53cfd7e70a50a54bea18ddfc3797\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca350bc8a2f3ce6b6271662b78adfcbf42bdaea57a45d78132ab39d08c50562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t252\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z572k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.912919 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-swntl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1811fa8-9015-4fe0-8fad-2461d64cdffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7kh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T12:12:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-swntl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.923434 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d701d737127624770781419c241736b454bebb15079321615a976f0c5df4eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.938201 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T12:12:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://575286ec849835af71640c5fc66b7f6cb3771de69091ba88615396c0d4f67f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad465d24284e2402c9c7412008a33d1b31b50146964a99655b8ffd62325220be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T12:12:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.952397 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.952447 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.952459 4745 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.952476 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.952490 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:12Z","lastTransitionTime":"2026-01-27T12:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:12 crc kubenswrapper[4745]: E0127 12:13:12.970083 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.974316 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.974445 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.974509 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.974581 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.974664 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:12Z","lastTransitionTime":"2026-01-27T12:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:12 crc kubenswrapper[4745]: E0127 12:13:12.992173 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.995905 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.995940 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.995951 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.995967 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:12 crc kubenswrapper[4745]: I0127 12:13:12.995977 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:12Z","lastTransitionTime":"2026-01-27T12:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:13 crc kubenswrapper[4745]: E0127 12:13:13.012494 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:13Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.016025 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.016082 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.016093 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.016105 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.016114 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:13Z","lastTransitionTime":"2026-01-27T12:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:13 crc kubenswrapper[4745]: E0127 12:13:13.031006 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:13Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.034543 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.034592 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.034600 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.034615 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.034624 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:13Z","lastTransitionTime":"2026-01-27T12:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:13 crc kubenswrapper[4745]: E0127 12:13:13.052591 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T12:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T12:13:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e36e6303-ebda-46d5-bb95-7bd7c6e607a6\\\",\\\"systemUUID\\\":\\\"24c1c5dd-133d-4b30-899a-c18b8017a82a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T12:13:13Z is after 2025-08-24T17:21:41Z" Jan 27 12:13:13 crc kubenswrapper[4745]: E0127 12:13:13.052744 4745 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.054178 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.054230 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.054243 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.054259 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.054279 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:13Z","lastTransitionTime":"2026-01-27T12:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.073870 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:13:13 crc kubenswrapper[4745]: E0127 12:13:13.074018 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.073871 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.073877 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.073871 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:13:13 crc kubenswrapper[4745]: E0127 12:13:13.074234 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:13:13 crc kubenswrapper[4745]: E0127 12:13:13.074282 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:13:13 crc kubenswrapper[4745]: E0127 12:13:13.074106 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.083241 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 03:00:41.698962386 +0000 UTC Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.156470 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.156514 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.156556 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.156582 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.156597 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:13Z","lastTransitionTime":"2026-01-27T12:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.259026 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.259090 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.259109 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.259129 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.259145 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:13Z","lastTransitionTime":"2026-01-27T12:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.361654 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.361736 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.361764 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.361796 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.361880 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:13Z","lastTransitionTime":"2026-01-27T12:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.464143 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.464185 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.464199 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.464215 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.464227 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:13Z","lastTransitionTime":"2026-01-27T12:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.567121 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.567168 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.567177 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.567191 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.567200 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:13Z","lastTransitionTime":"2026-01-27T12:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.669346 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.669393 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.669404 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.669418 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.669427 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:13Z","lastTransitionTime":"2026-01-27T12:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.772177 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.772220 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.772233 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.772251 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.772263 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:13Z","lastTransitionTime":"2026-01-27T12:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.875124 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.875162 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.875174 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.875225 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.875237 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:13Z","lastTransitionTime":"2026-01-27T12:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.977769 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.977806 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.977836 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.977851 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:13 crc kubenswrapper[4745]: I0127 12:13:13.977866 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:13Z","lastTransitionTime":"2026-01-27T12:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.082935 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.082977 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.082986 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.083001 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.083011 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:14Z","lastTransitionTime":"2026-01-27T12:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.083742 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 14:21:39.555997662 +0000 UTC Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.084394 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.186095 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.186159 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.186184 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.186214 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.186236 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:14Z","lastTransitionTime":"2026-01-27T12:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.288519 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.288562 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.288576 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.288591 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.288600 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:14Z","lastTransitionTime":"2026-01-27T12:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.390914 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.391043 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.391073 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.391101 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.391124 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:14Z","lastTransitionTime":"2026-01-27T12:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.493782 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.493850 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.493871 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.493889 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.493901 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:14Z","lastTransitionTime":"2026-01-27T12:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.596402 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.596503 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.596524 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.596550 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.596568 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:14Z","lastTransitionTime":"2026-01-27T12:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.699796 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.700018 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.700041 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.700065 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.700082 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:14Z","lastTransitionTime":"2026-01-27T12:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.803249 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.803311 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.803328 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.803350 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.803368 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:14Z","lastTransitionTime":"2026-01-27T12:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.906769 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.907018 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.907110 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.907177 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.907235 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:14Z","lastTransitionTime":"2026-01-27T12:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.964877 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:13:14 crc kubenswrapper[4745]: E0127 12:13:14.965039 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:18.965010396 +0000 UTC m=+151.769921124 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.965139 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:13:14 crc kubenswrapper[4745]: I0127 12:13:14.965194 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:13:14 crc kubenswrapper[4745]: E0127 12:13:14.965263 4745 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 12:13:14 crc kubenswrapper[4745]: E0127 12:13:14.965311 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 12:14:18.965299685 +0000 UTC m=+151.770210383 (durationBeforeRetry 1m4s). 
Jan 27 12:13:14 crc kubenswrapper[4745]: E0127 12:13:14.965311 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 12:14:18.965299685 +0000 UTC m=+151.770210383 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 27 12:13:14 crc kubenswrapper[4745]: E0127 12:13:14.965392 4745 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 12:13:14 crc kubenswrapper[4745]: E0127 12:13:14.965485 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 12:14:18.965465751 +0000 UTC m=+151.770376469 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 12:13:15 crc kubenswrapper[4745]: I0127 12:13:15.010229 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:13:15 crc kubenswrapper[4745]: I0127 12:13:15.010498 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:13:15 crc kubenswrapper[4745]: I0127 12:13:15.010576 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:13:15 crc kubenswrapper[4745]: I0127 12:13:15.010675 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:13:15 crc kubenswrapper[4745]: I0127 12:13:15.010766 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:15Z","lastTransitionTime":"2026-01-27T12:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
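The repeating NodeNotReady churn in this section traces back to the node-status patch failures logged earlier: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-27. A minimal sketch of how one might confirm the presented certificate's validity window from the node (a hypothetical diagnostic, not one of the logged components):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Dial the webhook endpoint named in the failed patch entries.
	// InsecureSkipVerify is deliberate: verification already fails,
	// and we only want to read the certificate's dates.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	// PeerCertificates[0] is the leaf; this sketch assumes the server
	// sent at least one certificate.
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("NotBefore:", cert.NotBefore)
	fmt.Println("NotAfter: ", cert.NotAfter) // expect 2025-08-24T17:21:41Z per the log
	fmt.Println("Expired:  ", time.Now().After(cert.NotAfter))
}
```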
Has your network provider started?"} Jan 27 12:13:15 crc kubenswrapper[4745]: I0127 12:13:15.066877 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:13:15 crc kubenswrapper[4745]: I0127 12:13:15.067004 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:13:15 crc kubenswrapper[4745]: E0127 12:13:15.067225 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 12:13:15 crc kubenswrapper[4745]: E0127 12:13:15.067268 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 12:13:15 crc kubenswrapper[4745]: E0127 12:13:15.067294 4745 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 12:13:15 crc kubenswrapper[4745]: E0127 12:13:15.067443 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 12:14:19.067416967 +0000 UTC m=+151.872327695 (durationBeforeRetry 1m4s). 
Jan 27 12:13:15 crc kubenswrapper[4745]: E0127 12:13:15.067565 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 27 12:13:15 crc kubenswrapper[4745]: E0127 12:13:15.067595 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 27 12:13:15 crc kubenswrapper[4745]: E0127 12:13:15.067610 4745 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 12:13:15 crc kubenswrapper[4745]: E0127 12:13:15.067655 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 12:14:19.067639915 +0000 UTC m=+151.872550643 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 12:13:15 crc kubenswrapper[4745]: I0127 12:13:15.073572 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 12:13:15 crc kubenswrapper[4745]: I0127 12:13:15.073569 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 12:13:15 crc kubenswrapper[4745]: I0127 12:13:15.073705 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl"
Jan 27 12:13:15 crc kubenswrapper[4745]: I0127 12:13:15.073714 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 12:13:15 crc kubenswrapper[4745]: E0127 12:13:15.073934 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
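Every "Error syncing pod, skipping" entry in this stretch reduces to the same precondition: the kubelet found no CNI network configuration in /etc/kubernetes/cni/net.d/. A small read-only sketch of that check, run on the node itself (the .conf/.conflist/.json extension set is an assumption based on common CNI conventions; the path comes from the log):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// NetworkReady stays false until at least one network config file
	// appears in the CNI configuration directory named by the log.
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read", dir, "->", err)
		return
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration file in", dir, "- network plugin not ready")
	}
}
```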
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:13:15 crc kubenswrapper[4745]: E0127 12:13:15.074011 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:13:15 crc kubenswrapper[4745]: E0127 12:13:15.074079 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:13:15 crc kubenswrapper[4745]: E0127 12:13:15.074200 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:13:15 crc kubenswrapper[4745]: I0127 12:13:15.084913 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 03:23:00.510210082 +0000 UTC Jan 27 12:13:15 crc kubenswrapper[4745]: I0127 12:13:15.113898 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:15 crc kubenswrapper[4745]: I0127 12:13:15.113938 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:15 crc kubenswrapper[4745]: I0127 12:13:15.113949 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:15 crc kubenswrapper[4745]: I0127 12:13:15.113965 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:15 crc kubenswrapper[4745]: I0127 12:13:15.113974 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:15Z","lastTransitionTime":"2026-01-27T12:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[12:13:15.216-12:13:16.042: the NodeHasSufficientMemory/NodeHasNoDiskPressure/NodeHasSufficientPID/NodeNotReady event set and the "Node became not ready" KubeletNotReady condition repeat at ~100 ms intervals, identical apart from timestamps]
Jan 27 12:13:16 crc kubenswrapper[4745]: I0127 12:13:16.085560 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 10:53:38.81280704 +0000 UTC
[12:13:16.145-12:13:16.970: same node status/condition block repeats at ~100 ms intervals]
Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.073083 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.073132 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.073092 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 12:13:17 crc kubenswrapper[4745]: E0127 12:13:17.073366 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 12:13:17 crc kubenswrapper[4745]: E0127 12:13:17.073579 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.073644 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl"
Jan 27 12:13:17 crc kubenswrapper[4745]: E0127 12:13:17.073704 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
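The repeated KubeletNotReady message names the exact directory the kubelet is polling. A small stdlib-only sketch to inspect that directory on the node itself; the path is taken from the log, and the `.conflist` parsing is illustrative (ovn-kubernetes normally drops a config file here once it is up):

```python
# List whatever CNI configuration files exist in the directory the kubelet
# error points at; an empty result matches the NetworkPluginNotReady loop.
import json
from pathlib import Path

cni_dir = Path("/etc/kubernetes/cni/net.d")
files = sorted(cni_dir.glob("*")) if cni_dir.exists() else []
if not files:
    print(f"{cni_dir}: no CNI configuration files (matches the kubelet error)")
for f in files:
    print(f.name)
    if f.suffix in (".conf", ".conflist"):
        # CNI configs are JSON; the top-level "name" identifies the network.
        print(" network:", json.loads(f.read_text()).get("name"))
```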
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:13:17 crc kubenswrapper[4745]: E0127 12:13:17.073866 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.073913 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.074040 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.074067 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.074103 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.074127 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:17Z","lastTransitionTime":"2026-01-27T12:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.086013 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 16:51:31.707923501 +0000 UTC Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.177167 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.177231 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.177253 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.177276 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.177363 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:17Z","lastTransitionTime":"2026-01-27T12:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.280088 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.280151 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.280169 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.280201 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.280220 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:17Z","lastTransitionTime":"2026-01-27T12:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.388636 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.388688 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.388705 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.388728 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.388746 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:17Z","lastTransitionTime":"2026-01-27T12:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.498669 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.498990 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.499076 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.499167 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.499252 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:17Z","lastTransitionTime":"2026-01-27T12:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.602761 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.602848 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.602866 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.602890 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.602908 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:17Z","lastTransitionTime":"2026-01-27T12:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.705878 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.705930 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.705946 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.705967 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.705981 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:17Z","lastTransitionTime":"2026-01-27T12:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.808481 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.808544 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.808561 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.808584 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.808602 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:17Z","lastTransitionTime":"2026-01-27T12:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.911168 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.911212 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.911225 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.911241 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:17 crc kubenswrapper[4745]: I0127 12:13:17.911254 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:17Z","lastTransitionTime":"2026-01-27T12:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.013859 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.013911 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.013928 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.013950 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.013967 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:18Z","lastTransitionTime":"2026-01-27T12:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
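Each certificate_manager.go probe logs the same expiry but a different rotation deadline. That matches client-go's certificate manager, which jitters the deadline to a random point roughly 70-90% of the way through the certificate's lifetime. A minimal sketch of that policy; only the expiry comes from the log, and the one-year lifetime is an assumption for illustration:

```python
# Sketch of a jittered rotation deadline: sampled uniformly in the 70-90%
# span of the certificate's validity window, so each probe prints a
# different deadline for the same expiry (as in the log lines above).
import random
from datetime import datetime, timedelta

def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
    total = not_after - not_before
    return not_before + timedelta(
        seconds=total.total_seconds() * (0.7 + 0.2 * random.random())
    )

expiry = datetime(2026, 2, 24, 5, 53, 3)   # from the certificate_manager lines
issued = expiry - timedelta(days=365)      # assumption: one-year serving cert
print(rotation_deadline(issued, expiry))
```

Under those assumptions the 70-90% window runs from early November 2025 to mid January 2026, which brackets every deadline logged in this capture.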
Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.086881 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 07:14:20.274903172 +0000 UTC
[12:13:18.119: same node status/condition block]
Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.142721 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-97hlh" podStartSLOduration=68.142693716 podStartE2EDuration="1m8.142693716s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:13:18.126371126 +0000 UTC m=+90.931281884" watchObservedRunningTime="2026-01-27 12:13:18.142693716 +0000 UTC m=+90.947604414"
Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.159381 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podStartSLOduration=68.159359698 podStartE2EDuration="1m8.159359698s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:13:18.143934448 +0000 UTC m=+90.948845176" watchObservedRunningTime="2026-01-27 12:13:18.159359698 +0000 UTC m=+90.964270396"
Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.159547 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z572k" podStartSLOduration=67.159542954 podStartE2EDuration="1m7.159542954s" podCreationTimestamp="2026-01-27 12:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:13:18.159186712 +0000 UTC m=+90.964097410" watchObservedRunningTime="2026-01-27 12:13:18.159542954 +0000 UTC m=+90.964453652"
Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.214928 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=4.21491356 podStartE2EDuration="4.21491356s" podCreationTimestamp="2026-01-27 12:13:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:13:18.201144576 +0000 UTC m=+91.006055264" watchObservedRunningTime="2026-01-27 12:13:18.21491356 +0000 UTC m=+91.019824248"
Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.215025 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=61.215022144 podStartE2EDuration="1m1.215022144s" podCreationTimestamp="2026-01-27 12:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:13:18.214447304 +0000 UTC m=+91.019357992" watchObservedRunningTime="2026-01-27 12:13:18.215022144 +0000 UTC m=+91.019932832"
[12:13:18.222: same node status/condition block]
Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.244549 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-4x9px" podStartSLOduration=68.244530068 podStartE2EDuration="1m8.244530068s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:13:18.227803375 +0000 UTC m=+91.032714063" watchObservedRunningTime="2026-01-27 12:13:18.244530068 +0000 UTC m=+91.049440756"
Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.258996 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=35.258977205 podStartE2EDuration="35.258977205s" podCreationTimestamp="2026-01-27 12:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:13:18.245022685 +0000 UTC m=+91.049933363" watchObservedRunningTime="2026-01-27 12:13:18.258977205 +0000 UTC m=+91.063887893"
Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.289104 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=66.28908294 podStartE2EDuration="1m6.28908294s" podCreationTimestamp="2026-01-27 12:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:13:18.287272949 +0000 UTC m=+91.092183637" watchObservedRunningTime="2026-01-27 12:13:18.28908294 +0000 UTC m=+91.093993628"
Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.289256 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-5d8gm" podStartSLOduration=68.289237305 podStartE2EDuration="1m8.289237305s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:13:18.271479407 +0000 UTC m=+91.076390095" watchObservedRunningTime="2026-01-27 12:13:18.289237305 +0000 UTC m=+91.094147993"
[12:13:18.323: same node status/condition block]
Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.381349 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-gc8mv" podStartSLOduration=68.381324769 podStartE2EDuration="1m8.381324769s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:13:18.356271375 +0000 UTC m=+91.161182063" watchObservedRunningTime="2026-01-27 12:13:18.381324769 +0000 UTC m=+91.186235457"
[12:13:18.425-12:13:18.529: same node status/condition block repeats at ~100 ms intervals]
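The pod_startup_latency_tracker records above are flat key=value text, which makes them easy to mine from the journal. A small helper, using one of the real lines from this capture as its test input:

```python
# Extract (pod, podStartSLOduration) pairs from kubelet journal lines like
# the pod_startup_latency_tracker records above.
import re

PATTERN = re.compile(
    r'pod="(?P<pod>[^"]+)".*?podStartSLOduration=(?P<slo>[0-9.]+)'
)

def startup_latencies(lines):
    for line in lines:
        if "Observed pod startup duration" not in line:
            continue
        m = PATTERN.search(line)
        if m:
            yield m.group("pod"), float(m.group("slo"))

sample = ('I0127 12:13:18.142721 4745 pod_startup_latency_tracker.go:104] '
          '"Observed pod startup duration" pod="openshift-multus/multus-97hlh" '
          'podStartSLOduration=68.142693716')
print(list(startup_latencies([sample])))  # [('openshift-multus/multus-97hlh', 68.142693716)]
```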
Has your network provider started?"} Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.630854 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.630914 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.630931 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.630956 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.630976 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:18Z","lastTransitionTime":"2026-01-27T12:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.733398 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.733471 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.733490 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.733518 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.733535 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:18Z","lastTransitionTime":"2026-01-27T12:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.836267 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.836313 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.836324 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.836340 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.836350 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:18Z","lastTransitionTime":"2026-01-27T12:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.939417 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.939466 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.939477 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.939495 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:18 crc kubenswrapper[4745]: I0127 12:13:18.939507 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:18Z","lastTransitionTime":"2026-01-27T12:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.041457 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.041511 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.041529 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.041548 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.041565 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:19Z","lastTransitionTime":"2026-01-27T12:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.073126 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.073188 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.073225 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:13:19 crc kubenswrapper[4745]: E0127 12:13:19.073266 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.073131 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:13:19 crc kubenswrapper[4745]: E0127 12:13:19.073415 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:13:19 crc kubenswrapper[4745]: E0127 12:13:19.073478 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:13:19 crc kubenswrapper[4745]: E0127 12:13:19.073544 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.087959 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 13:05:06.08089057 +0000 UTC Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.144007 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.144066 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.144077 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.144102 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.144128 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:19Z","lastTransitionTime":"2026-01-27T12:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.247745 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.247850 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.247868 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.247894 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.247912 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:19Z","lastTransitionTime":"2026-01-27T12:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.350417 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.350481 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.350494 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.350510 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.350522 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:19Z","lastTransitionTime":"2026-01-27T12:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.453221 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.453274 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.453290 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.453312 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.453329 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:19Z","lastTransitionTime":"2026-01-27T12:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.556572 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.556623 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.556634 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.556653 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.556666 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:19Z","lastTransitionTime":"2026-01-27T12:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.659254 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.659310 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.659323 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.659344 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.659358 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:19Z","lastTransitionTime":"2026-01-27T12:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.761594 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.761656 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.761676 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.761706 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.761726 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:19Z","lastTransitionTime":"2026-01-27T12:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.865216 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.865282 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.865301 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.865325 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.865343 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:19Z","lastTransitionTime":"2026-01-27T12:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.968000 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.968344 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.968403 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.968441 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:13:19 crc kubenswrapper[4745]: I0127 12:13:19.968480 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:19Z","lastTransitionTime":"2026-01-27T12:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.071799 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.072108 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.072188 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.072267 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.072388 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:20Z","lastTransitionTime":"2026-01-27T12:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
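
The condition={...} blob printed by every setters.go:603 entry is the node's Ready condition serialized as JSON; its keys line up with the NodeCondition type in k8s.io/api/core/v1. A self-contained Go sketch that decodes one of the logged blobs follows; the local struct mirrors those field names and is an illustration, not kubelet code.

```go
// nodecondition.go - sketch: decode the condition JSON from the log above.
package main

import (
	"encoding/json"
	"fmt"
)

// NodeCondition mirrors the JSON keys seen in the setters.go:603 entries
// (same field names as k8s.io/api/core/v1.NodeCondition).
type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// One logged blob, message abridged for brevity.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:19Z","lastTransitionTime":"2026-01-27T12:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false ..."}`
	var c NodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	// The node stays NotReady while the Ready condition has Status=False.
	fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason)
}
```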
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.089006 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 09:32:23.244321025 +0000 UTC
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.176736 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.176829 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.176847 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.176870 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.176886 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:20Z","lastTransitionTime":"2026-01-27T12:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.279483 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.279533 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.279551 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.279573 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.279589 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:20Z","lastTransitionTime":"2026-01-27T12:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.381557 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.381621 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.381633 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.381648 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.381659 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:20Z","lastTransitionTime":"2026-01-27T12:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.485020 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.485064 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.485076 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.485093 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.485105 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:20Z","lastTransitionTime":"2026-01-27T12:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.588297 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.588353 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.588369 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.588391 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.588408 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:20Z","lastTransitionTime":"2026-01-27T12:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.691515 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.691563 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.691578 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.691597 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.691613 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:20Z","lastTransitionTime":"2026-01-27T12:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.794156 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.794219 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.794236 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.794260 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.794279 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:20Z","lastTransitionTime":"2026-01-27T12:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.897556 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.898017 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.898061 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.898090 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:13:20 crc kubenswrapper[4745]: I0127 12:13:20.898108 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:20Z","lastTransitionTime":"2026-01-27T12:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.000093 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.000137 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.000166 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.000181 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.000189 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:21Z","lastTransitionTime":"2026-01-27T12:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.073678 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl"
Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.073729 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.073734 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.073690 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 12:13:21 crc kubenswrapper[4745]: E0127 12:13:21.073860 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd"
Jan 27 12:13:21 crc kubenswrapper[4745]: E0127 12:13:21.073962 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 12:13:21 crc kubenswrapper[4745]: E0127 12:13:21.074018 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
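
Note how each certificate_manager.go:356 entry reports a different rotation deadline (2025-12-10, 2025-12-06, 2026-01-04, ...) for the same certificate: client-go's certificate manager recomputes the deadline with random jitter on every evaluation so a fleet of kubelets does not all rotate at once. Below is a loose Go sketch under that assumption; the jitter window (roughly 70-84% of the certificate lifetime) and the one-year issue time are assumptions about client-go and this cluster, not facts from the log.

```go
// rotationjitter.go - sketch of why the logged rotation deadline moves
// around between entries; illustrative only, not client-go code.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point in roughly the 70-84% band of the
// certificate's lifetime (assumed jitter behavior, see lead-in above).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	frac := 0.7 * (1 + 0.2*rand.Float64()) // in [0.70, 0.84)
	return notBefore.Add(time.Duration(frac * float64(total)))
}

func main() {
	// Expiry taken from the log; the issue time is assumed (~1-year cert).
	notAfter, err := time.Parse("2006-01-02 15:04:05 -0700 MST", "2026-02-24 05:53:03 +0000 UTC")
	if err != nil {
		panic(err)
	}
	notBefore := notAfter.AddDate(-1, 0, 0)
	for i := 0; i < 3; i++ { // each call lands somewhere new, as in the log
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}
```

A deadline already in the past (as here) simply means the manager wants to rotate now; rotation is then gated on the cluster approving and serving a new certificate.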
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:13:21 crc kubenswrapper[4745]: E0127 12:13:21.074131 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.089500 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 10:18:42.132259775 +0000 UTC Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.102714 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.102765 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.102780 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.102800 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.102832 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:21Z","lastTransitionTime":"2026-01-27T12:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.206314 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.206365 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.206381 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.206404 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.206422 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:21Z","lastTransitionTime":"2026-01-27T12:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.309768 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.309851 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.309873 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.309898 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.309916 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:21Z","lastTransitionTime":"2026-01-27T12:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.412775 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.412887 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.412905 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.412928 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.412946 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:21Z","lastTransitionTime":"2026-01-27T12:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.516247 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.516300 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.516312 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.516332 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.516344 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:21Z","lastTransitionTime":"2026-01-27T12:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.619705 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.619788 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.619840 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.619872 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.619896 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:21Z","lastTransitionTime":"2026-01-27T12:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.735507 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.735582 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.735600 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.735626 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.735644 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:21Z","lastTransitionTime":"2026-01-27T12:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.838849 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.838913 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.838934 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.838953 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.838966 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:21Z","lastTransitionTime":"2026-01-27T12:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.941913 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.941954 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.941963 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.941976 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:21 crc kubenswrapper[4745]: I0127 12:13:21.941985 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:21Z","lastTransitionTime":"2026-01-27T12:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.044870 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.044922 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.044934 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.044948 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.044960 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:22Z","lastTransitionTime":"2026-01-27T12:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.089617 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 11:18:11.859134899 +0000 UTC Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.148598 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.148666 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.148684 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.148708 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.148731 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:22Z","lastTransitionTime":"2026-01-27T12:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.251645 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.251714 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.251737 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.251766 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.251789 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:22Z","lastTransitionTime":"2026-01-27T12:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.354317 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.354383 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.354406 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.354434 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.354456 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:22Z","lastTransitionTime":"2026-01-27T12:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.456669 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.456723 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.456745 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.456772 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.456797 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:22Z","lastTransitionTime":"2026-01-27T12:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.561288 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.561366 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.561400 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.561431 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.561452 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:22Z","lastTransitionTime":"2026-01-27T12:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.664850 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.664938 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.664965 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.664995 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.665017 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:22Z","lastTransitionTime":"2026-01-27T12:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.767383 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.767432 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.767444 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.767457 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.767469 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:22Z","lastTransitionTime":"2026-01-27T12:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.870429 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.870501 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.870516 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.870537 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.870566 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:22Z","lastTransitionTime":"2026-01-27T12:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.973227 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.973276 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.973295 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.973317 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:22 crc kubenswrapper[4745]: I0127 12:13:22.973333 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:22Z","lastTransitionTime":"2026-01-27T12:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.080199 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.080290 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.080357 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.080223 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:13:23 crc kubenswrapper[4745]: E0127 12:13:23.080486 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:13:23 crc kubenswrapper[4745]: E0127 12:13:23.080688 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:13:23 crc kubenswrapper[4745]: E0127 12:13:23.080788 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:13:23 crc kubenswrapper[4745]: E0127 12:13:23.081005 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.082613 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.082674 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.082697 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.082725 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.082748 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:23Z","lastTransitionTime":"2026-01-27T12:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.090366 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 22:36:42.657174852 +0000 UTC Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.194557 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.194592 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.194604 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.194620 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.194633 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:23Z","lastTransitionTime":"2026-01-27T12:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.297519 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.297570 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.297587 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.297613 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.297630 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:23Z","lastTransitionTime":"2026-01-27T12:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.324537 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.324649 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.324675 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.324702 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.324724 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T12:13:23Z","lastTransitionTime":"2026-01-27T12:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.383789 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-l78ks"]
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.384537 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l78ks"
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.386980 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.387086 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.387205 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.387656 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.396741 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a045d36-5430-462f-991d-8140db1eb0cd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-l78ks\" (UID: \"9a045d36-5430-462f-991d-8140db1eb0cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l78ks"
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.396838 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a045d36-5430-462f-991d-8140db1eb0cd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-l78ks\" (UID: \"9a045d36-5430-462f-991d-8140db1eb0cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l78ks"
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.396989 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9a045d36-5430-462f-991d-8140db1eb0cd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-l78ks\" (UID: \"9a045d36-5430-462f-991d-8140db1eb0cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l78ks"
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.397113 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9a045d36-5430-462f-991d-8140db1eb0cd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-l78ks\" (UID: \"9a045d36-5430-462f-991d-8140db1eb0cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l78ks"
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.397204 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9a045d36-5430-462f-991d-8140db1eb0cd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-l78ks\" (UID: \"9a045d36-5430-462f-991d-8140db1eb0cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l78ks"
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.498489 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9a045d36-5430-462f-991d-8140db1eb0cd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-l78ks\" (UID: \"9a045d36-5430-462f-991d-8140db1eb0cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l78ks"
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.498587 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9a045d36-5430-462f-991d-8140db1eb0cd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-l78ks\" (UID: \"9a045d36-5430-462f-991d-8140db1eb0cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l78ks"
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.498702 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9a045d36-5430-462f-991d-8140db1eb0cd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-l78ks\" (UID: \"9a045d36-5430-462f-991d-8140db1eb0cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l78ks"
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.498753 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9a045d36-5430-462f-991d-8140db1eb0cd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-l78ks\" (UID: \"9a045d36-5430-462f-991d-8140db1eb0cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l78ks"
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.498879 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9a045d36-5430-462f-991d-8140db1eb0cd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-l78ks\" (UID: \"9a045d36-5430-462f-991d-8140db1eb0cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l78ks"
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.498904 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a045d36-5430-462f-991d-8140db1eb0cd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-l78ks\" (UID: \"9a045d36-5430-462f-991d-8140db1eb0cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l78ks"
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.498982 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a045d36-5430-462f-991d-8140db1eb0cd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-l78ks\" (UID: \"9a045d36-5430-462f-991d-8140db1eb0cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l78ks"
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.500760 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9a045d36-5430-462f-991d-8140db1eb0cd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-l78ks\" (UID: \"9a045d36-5430-462f-991d-8140db1eb0cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l78ks"
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.505584 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a045d36-5430-462f-991d-8140db1eb0cd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-l78ks\" (UID: \"9a045d36-5430-462f-991d-8140db1eb0cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l78ks"
Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.532345 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\"
(UniqueName: \"kubernetes.io/projected/9a045d36-5430-462f-991d-8140db1eb0cd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-l78ks\" (UID: \"9a045d36-5430-462f-991d-8140db1eb0cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l78ks" Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.708887 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l78ks" Jan 27 12:13:23 crc kubenswrapper[4745]: I0127 12:13:23.764757 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l78ks" event={"ID":"9a045d36-5430-462f-991d-8140db1eb0cd","Type":"ContainerStarted","Data":"ab5df1fb08a3884f18ec0123eef8d94f51362458387ced041b779f54e31c4dcd"} Jan 27 12:13:24 crc kubenswrapper[4745]: I0127 12:13:24.091472 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 10:01:20.194518439 +0000 UTC Jan 27 12:13:24 crc kubenswrapper[4745]: I0127 12:13:24.091512 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 27 12:13:24 crc kubenswrapper[4745]: I0127 12:13:24.100587 4745 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 12:13:24 crc kubenswrapper[4745]: I0127 12:13:24.769090 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l78ks" event={"ID":"9a045d36-5430-462f-991d-8140db1eb0cd","Type":"ContainerStarted","Data":"e809c3fc81212ed842036c7725a6fd844deaf5066ef89d8abb8d75dab8aea1e6"} Jan 27 12:13:24 crc kubenswrapper[4745]: I0127 12:13:24.793692 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l78ks" podStartSLOduration=74.79360231 podStartE2EDuration="1m14.79360231s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:13:24.789307595 +0000 UTC m=+97.594218303" watchObservedRunningTime="2026-01-27 12:13:24.79360231 +0000 UTC m=+97.598513038" Jan 27 12:13:25 crc kubenswrapper[4745]: I0127 12:13:25.073393 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:13:25 crc kubenswrapper[4745]: I0127 12:13:25.073577 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:13:25 crc kubenswrapper[4745]: E0127 12:13:25.073621 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:13:25 crc kubenswrapper[4745]: I0127 12:13:25.073679 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:13:25 crc kubenswrapper[4745]: I0127 12:13:25.073723 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:13:25 crc kubenswrapper[4745]: E0127 12:13:25.073872 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:13:25 crc kubenswrapper[4745]: E0127 12:13:25.073940 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:13:25 crc kubenswrapper[4745]: E0127 12:13:25.074012 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:13:26 crc kubenswrapper[4745]: I0127 12:13:26.074783 4745 scope.go:117] "RemoveContainer" containerID="17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627" Jan 27 12:13:26 crc kubenswrapper[4745]: E0127 12:13:26.075120 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bnfh4_openshift-ovn-kubernetes(26b1987b-69bb-4768-a874-5a97b3327469)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" podUID="26b1987b-69bb-4768-a874-5a97b3327469" Jan 27 12:13:27 crc kubenswrapper[4745]: I0127 12:13:27.072802 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:13:27 crc kubenswrapper[4745]: I0127 12:13:27.072877 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:13:27 crc kubenswrapper[4745]: I0127 12:13:27.072924 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:13:27 crc kubenswrapper[4745]: I0127 12:13:27.072850 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:13:27 crc kubenswrapper[4745]: E0127 12:13:27.073018 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:13:27 crc kubenswrapper[4745]: E0127 12:13:27.073127 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:13:27 crc kubenswrapper[4745]: E0127 12:13:27.073254 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:13:27 crc kubenswrapper[4745]: E0127 12:13:27.073365 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:13:29 crc kubenswrapper[4745]: I0127 12:13:29.073926 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:13:29 crc kubenswrapper[4745]: I0127 12:13:29.073976 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:13:29 crc kubenswrapper[4745]: I0127 12:13:29.074023 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:13:29 crc kubenswrapper[4745]: I0127 12:13:29.074068 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:13:29 crc kubenswrapper[4745]: E0127 12:13:29.074386 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:13:29 crc kubenswrapper[4745]: E0127 12:13:29.074555 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:13:29 crc kubenswrapper[4745]: E0127 12:13:29.074649 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:13:29 crc kubenswrapper[4745]: E0127 12:13:29.074736 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:13:29 crc kubenswrapper[4745]: I0127 12:13:29.483800 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1811fa8-9015-4fe0-8fad-2461d64cdffd-metrics-certs\") pod \"network-metrics-daemon-swntl\" (UID: \"c1811fa8-9015-4fe0-8fad-2461d64cdffd\") " pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:13:29 crc kubenswrapper[4745]: E0127 12:13:29.484054 4745 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 12:13:29 crc kubenswrapper[4745]: E0127 12:13:29.484150 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1811fa8-9015-4fe0-8fad-2461d64cdffd-metrics-certs podName:c1811fa8-9015-4fe0-8fad-2461d64cdffd nodeName:}" failed. No retries permitted until 2026-01-27 12:14:33.484128203 +0000 UTC m=+166.289038901 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1811fa8-9015-4fe0-8fad-2461d64cdffd-metrics-certs") pod "network-metrics-daemon-swntl" (UID: "c1811fa8-9015-4fe0-8fad-2461d64cdffd") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 12:13:31 crc kubenswrapper[4745]: I0127 12:13:31.072850 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:13:31 crc kubenswrapper[4745]: I0127 12:13:31.072900 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:13:31 crc kubenswrapper[4745]: E0127 12:13:31.073434 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:13:31 crc kubenswrapper[4745]: I0127 12:13:31.072959 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:13:31 crc kubenswrapper[4745]: I0127 12:13:31.072934 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:13:31 crc kubenswrapper[4745]: E0127 12:13:31.073538 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:13:31 crc kubenswrapper[4745]: E0127 12:13:31.073313 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:13:31 crc kubenswrapper[4745]: E0127 12:13:31.073741 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:13:33 crc kubenswrapper[4745]: I0127 12:13:33.073159 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:13:33 crc kubenswrapper[4745]: I0127 12:13:33.073230 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:13:33 crc kubenswrapper[4745]: I0127 12:13:33.073328 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:13:33 crc kubenswrapper[4745]: E0127 12:13:33.073435 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:13:33 crc kubenswrapper[4745]: I0127 12:13:33.073455 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:13:33 crc kubenswrapper[4745]: E0127 12:13:33.073558 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:13:33 crc kubenswrapper[4745]: E0127 12:13:33.073674 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:13:33 crc kubenswrapper[4745]: E0127 12:13:33.074000 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:13:33 crc kubenswrapper[4745]: I0127 12:13:33.089053 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 27 12:13:35 crc kubenswrapper[4745]: I0127 12:13:35.073323 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:13:35 crc kubenswrapper[4745]: I0127 12:13:35.073383 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:13:35 crc kubenswrapper[4745]: I0127 12:13:35.073419 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:13:35 crc kubenswrapper[4745]: E0127 12:13:35.073496 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:13:35 crc kubenswrapper[4745]: I0127 12:13:35.073347 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:13:35 crc kubenswrapper[4745]: E0127 12:13:35.073624 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:13:35 crc kubenswrapper[4745]: E0127 12:13:35.073712 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:13:35 crc kubenswrapper[4745]: E0127 12:13:35.073839 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:13:37 crc kubenswrapper[4745]: I0127 12:13:37.073652 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:13:37 crc kubenswrapper[4745]: I0127 12:13:37.073783 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:13:37 crc kubenswrapper[4745]: I0127 12:13:37.073683 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:13:37 crc kubenswrapper[4745]: E0127 12:13:37.074045 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:13:37 crc kubenswrapper[4745]: I0127 12:13:37.073678 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:13:37 crc kubenswrapper[4745]: E0127 12:13:37.073797 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:13:37 crc kubenswrapper[4745]: E0127 12:13:37.074497 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:13:37 crc kubenswrapper[4745]: E0127 12:13:37.075933 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:13:38 crc kubenswrapper[4745]: I0127 12:13:38.119532 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=5.119513054 podStartE2EDuration="5.119513054s" podCreationTimestamp="2026-01-27 12:13:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:13:38.117366362 +0000 UTC m=+110.922277100" watchObservedRunningTime="2026-01-27 12:13:38.119513054 +0000 UTC m=+110.924423742" Jan 27 12:13:39 crc kubenswrapper[4745]: I0127 12:13:39.073296 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:13:39 crc kubenswrapper[4745]: I0127 12:13:39.073300 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:13:39 crc kubenswrapper[4745]: E0127 12:13:39.073431 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:13:39 crc kubenswrapper[4745]: I0127 12:13:39.073299 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:13:39 crc kubenswrapper[4745]: E0127 12:13:39.073532 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:13:39 crc kubenswrapper[4745]: I0127 12:13:39.073724 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:13:39 crc kubenswrapper[4745]: E0127 12:13:39.073901 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:13:39 crc kubenswrapper[4745]: E0127 12:13:39.074007 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:13:41 crc kubenswrapper[4745]: I0127 12:13:41.074234 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:13:41 crc kubenswrapper[4745]: E0127 12:13:41.075054 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:13:41 crc kubenswrapper[4745]: I0127 12:13:41.074404 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:13:41 crc kubenswrapper[4745]: E0127 12:13:41.075310 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:13:41 crc kubenswrapper[4745]: I0127 12:13:41.074333 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:13:41 crc kubenswrapper[4745]: I0127 12:13:41.075487 4745 scope.go:117] "RemoveContainer" containerID="17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627" Jan 27 12:13:41 crc kubenswrapper[4745]: I0127 12:13:41.074440 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:13:41 crc kubenswrapper[4745]: E0127 12:13:41.075501 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:13:41 crc kubenswrapper[4745]: E0127 12:13:41.075719 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:13:41 crc kubenswrapper[4745]: E0127 12:13:41.075724 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bnfh4_openshift-ovn-kubernetes(26b1987b-69bb-4768-a874-5a97b3327469)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" podUID="26b1987b-69bb-4768-a874-5a97b3327469" Jan 27 12:13:43 crc kubenswrapper[4745]: I0127 12:13:43.073269 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:13:43 crc kubenswrapper[4745]: I0127 12:13:43.073304 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:13:43 crc kubenswrapper[4745]: E0127 12:13:43.073403 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:13:43 crc kubenswrapper[4745]: I0127 12:13:43.073268 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:13:43 crc kubenswrapper[4745]: E0127 12:13:43.073649 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:13:43 crc kubenswrapper[4745]: E0127 12:13:43.073835 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:13:43 crc kubenswrapper[4745]: I0127 12:13:43.073939 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:13:43 crc kubenswrapper[4745]: E0127 12:13:43.074068 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:13:45 crc kubenswrapper[4745]: I0127 12:13:45.073199 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:13:45 crc kubenswrapper[4745]: I0127 12:13:45.073249 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:13:45 crc kubenswrapper[4745]: I0127 12:13:45.073235 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:13:45 crc kubenswrapper[4745]: I0127 12:13:45.073206 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:13:45 crc kubenswrapper[4745]: E0127 12:13:45.073354 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:13:45 crc kubenswrapper[4745]: E0127 12:13:45.073478 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:13:45 crc kubenswrapper[4745]: E0127 12:13:45.073522 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:13:45 crc kubenswrapper[4745]: E0127 12:13:45.073597 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:13:47 crc kubenswrapper[4745]: I0127 12:13:47.073640 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:13:47 crc kubenswrapper[4745]: I0127 12:13:47.073728 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:13:47 crc kubenswrapper[4745]: E0127 12:13:47.073782 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:13:47 crc kubenswrapper[4745]: I0127 12:13:47.073800 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:13:47 crc kubenswrapper[4745]: I0127 12:13:47.073870 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:13:47 crc kubenswrapper[4745]: E0127 12:13:47.074112 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:13:47 crc kubenswrapper[4745]: E0127 12:13:47.074307 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:13:47 crc kubenswrapper[4745]: E0127 12:13:47.074388 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:13:48 crc kubenswrapper[4745]: E0127 12:13:48.077554 4745 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 27 12:13:48 crc kubenswrapper[4745]: E0127 12:13:48.338960 4745 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 27 12:13:48 crc kubenswrapper[4745]: I0127 12:13:48.842502 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-97hlh_c438e876-f4c1-42ca-b935-b5e58be9cfb2/kube-multus/1.log" Jan 27 12:13:48 crc kubenswrapper[4745]: I0127 12:13:48.843206 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-97hlh_c438e876-f4c1-42ca-b935-b5e58be9cfb2/kube-multus/0.log" Jan 27 12:13:48 crc kubenswrapper[4745]: I0127 12:13:48.843261 4745 generic.go:334] "Generic (PLEG): container finished" podID="c438e876-f4c1-42ca-b935-b5e58be9cfb2" containerID="baa0c592826e1a106ac51a6142fa5d252f837bafe03f4cdc94832dbb4704e9f3" exitCode=1 Jan 27 12:13:48 crc kubenswrapper[4745]: I0127 12:13:48.843300 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-97hlh" event={"ID":"c438e876-f4c1-42ca-b935-b5e58be9cfb2","Type":"ContainerDied","Data":"baa0c592826e1a106ac51a6142fa5d252f837bafe03f4cdc94832dbb4704e9f3"} Jan 27 12:13:48 crc kubenswrapper[4745]: I0127 12:13:48.843340 4745 scope.go:117] "RemoveContainer" containerID="910d600b548eefbc3d29c1b6ed8d47b5aaac7117ccf060cd56449d8519665966" Jan 27 12:13:48 crc kubenswrapper[4745]: I0127 12:13:48.843738 4745 scope.go:117] "RemoveContainer" containerID="baa0c592826e1a106ac51a6142fa5d252f837bafe03f4cdc94832dbb4704e9f3" Jan 27 12:13:48 crc kubenswrapper[4745]: E0127 12:13:48.843913 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-97hlh_openshift-multus(c438e876-f4c1-42ca-b935-b5e58be9cfb2)\"" pod="openshift-multus/multus-97hlh" podUID="c438e876-f4c1-42ca-b935-b5e58be9cfb2" Jan 27 12:13:49 crc kubenswrapper[4745]: I0127 12:13:49.073364 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:13:49 crc kubenswrapper[4745]: I0127 12:13:49.073388 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:13:49 crc kubenswrapper[4745]: E0127 12:13:49.073499 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:13:49 crc kubenswrapper[4745]: I0127 12:13:49.073648 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:13:49 crc kubenswrapper[4745]: E0127 12:13:49.073699 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:13:49 crc kubenswrapper[4745]: I0127 12:13:49.073787 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:13:49 crc kubenswrapper[4745]: E0127 12:13:49.073868 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:13:49 crc kubenswrapper[4745]: E0127 12:13:49.074052 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:13:49 crc kubenswrapper[4745]: I0127 12:13:49.848104 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-97hlh_c438e876-f4c1-42ca-b935-b5e58be9cfb2/kube-multus/1.log" Jan 27 12:13:51 crc kubenswrapper[4745]: I0127 12:13:51.073163 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:13:51 crc kubenswrapper[4745]: I0127 12:13:51.073187 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:13:51 crc kubenswrapper[4745]: E0127 12:13:51.073374 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:13:51 crc kubenswrapper[4745]: I0127 12:13:51.073204 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:13:51 crc kubenswrapper[4745]: E0127 12:13:51.073458 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:13:51 crc kubenswrapper[4745]: E0127 12:13:51.073483 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:13:51 crc kubenswrapper[4745]: I0127 12:13:51.073184 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:13:51 crc kubenswrapper[4745]: E0127 12:13:51.073575 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:13:53 crc kubenswrapper[4745]: I0127 12:13:53.072942 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:13:53 crc kubenswrapper[4745]: I0127 12:13:53.073054 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:13:53 crc kubenswrapper[4745]: I0127 12:13:53.072970 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:13:53 crc kubenswrapper[4745]: E0127 12:13:53.073143 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:13:53 crc kubenswrapper[4745]: I0127 12:13:53.073205 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:13:53 crc kubenswrapper[4745]: E0127 12:13:53.073374 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:13:53 crc kubenswrapper[4745]: E0127 12:13:53.073704 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:13:53 crc kubenswrapper[4745]: E0127 12:13:53.073765 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:13:53 crc kubenswrapper[4745]: E0127 12:13:53.340301 4745 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 12:13:54 crc kubenswrapper[4745]: I0127 12:13:54.075251 4745 scope.go:117] "RemoveContainer" containerID="17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627" Jan 27 12:13:54 crc kubenswrapper[4745]: I0127 12:13:54.875693 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bnfh4_26b1987b-69bb-4768-a874-5a97b3327469/ovnkube-controller/3.log" Jan 27 12:13:54 crc kubenswrapper[4745]: I0127 12:13:54.878846 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" event={"ID":"26b1987b-69bb-4768-a874-5a97b3327469","Type":"ContainerStarted","Data":"60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8"} Jan 27 12:13:54 crc kubenswrapper[4745]: I0127 12:13:54.879414 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:13:54 crc kubenswrapper[4745]: I0127 12:13:54.909307 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" podStartSLOduration=104.90928034 podStartE2EDuration="1m44.90928034s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:13:54.908477673 +0000 UTC m=+127.713388361" watchObservedRunningTime="2026-01-27 12:13:54.90928034 +0000 UTC m=+127.714191058" Jan 27 12:13:55 crc kubenswrapper[4745]: I0127 12:13:55.073010 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:13:55 crc kubenswrapper[4745]: I0127 12:13:55.073025 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:13:55 crc kubenswrapper[4745]: I0127 12:13:55.073027 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:13:55 crc kubenswrapper[4745]: I0127 12:13:55.073145 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:13:55 crc kubenswrapper[4745]: E0127 12:13:55.073260 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:13:55 crc kubenswrapper[4745]: E0127 12:13:55.073502 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:13:55 crc kubenswrapper[4745]: E0127 12:13:55.073596 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:13:55 crc kubenswrapper[4745]: E0127 12:13:55.073713 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:13:55 crc kubenswrapper[4745]: I0127 12:13:55.322752 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-swntl"] Jan 27 12:13:55 crc kubenswrapper[4745]: I0127 12:13:55.881925 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:13:55 crc kubenswrapper[4745]: E0127 12:13:55.882133 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:13:57 crc kubenswrapper[4745]: I0127 12:13:57.073330 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:13:57 crc kubenswrapper[4745]: I0127 12:13:57.073356 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:13:57 crc kubenswrapper[4745]: I0127 12:13:57.073419 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:13:57 crc kubenswrapper[4745]: I0127 12:13:57.073504 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:13:57 crc kubenswrapper[4745]: E0127 12:13:57.073794 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:13:57 crc kubenswrapper[4745]: E0127 12:13:57.074054 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 12:13:57 crc kubenswrapper[4745]: E0127 12:13:57.074443 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 12:13:57 crc kubenswrapper[4745]: E0127 12:13:57.075009 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd"
Jan 27 12:13:58 crc kubenswrapper[4745]: E0127 12:13:58.341156 4745 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 27 12:13:59 crc kubenswrapper[4745]: I0127 12:13:59.072995 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl"
Jan 27 12:13:59 crc kubenswrapper[4745]: I0127 12:13:59.073083 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 12:13:59 crc kubenswrapper[4745]: I0127 12:13:59.073132 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 12:13:59 crc kubenswrapper[4745]: I0127 12:13:59.073177 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 12:13:59 crc kubenswrapper[4745]: E0127 12:13:59.073250 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd"
Jan 27 12:13:59 crc kubenswrapper[4745]: E0127 12:13:59.073403 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 12:13:59 crc kubenswrapper[4745]: E0127 12:13:59.073571 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 12:13:59 crc kubenswrapper[4745]: E0127 12:13:59.073697 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 12:14:01 crc kubenswrapper[4745]: I0127 12:14:01.073611 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl"
Jan 27 12:14:01 crc kubenswrapper[4745]: I0127 12:14:01.073659 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 12:14:01 crc kubenswrapper[4745]: E0127 12:14:01.073773 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd"
Jan 27 12:14:01 crc kubenswrapper[4745]: I0127 12:14:01.073876 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 12:14:01 crc kubenswrapper[4745]: I0127 12:14:01.073907 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 12:14:01 crc kubenswrapper[4745]: E0127 12:14:01.074088 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 12:14:01 crc kubenswrapper[4745]: E0127 12:14:01.074319 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 12:14:01 crc kubenswrapper[4745]: E0127 12:14:01.074336 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:14:01 crc kubenswrapper[4745]: I0127 12:14:01.074231 4745 scope.go:117] "RemoveContainer" containerID="baa0c592826e1a106ac51a6142fa5d252f837bafe03f4cdc94832dbb4704e9f3" Jan 27 12:14:01 crc kubenswrapper[4745]: I0127 12:14:01.906040 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-97hlh_c438e876-f4c1-42ca-b935-b5e58be9cfb2/kube-multus/1.log" Jan 27 12:14:01 crc kubenswrapper[4745]: I0127 12:14:01.906090 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-97hlh" event={"ID":"c438e876-f4c1-42ca-b935-b5e58be9cfb2","Type":"ContainerStarted","Data":"9e919995b5f66ba68c68e45fee6b3943248ac2b60f27245ab0acf28144661b43"} Jan 27 12:14:03 crc kubenswrapper[4745]: I0127 12:14:03.072700 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:14:03 crc kubenswrapper[4745]: E0127 12:14:03.073156 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:14:03 crc kubenswrapper[4745]: I0127 12:14:03.072755 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:14:03 crc kubenswrapper[4745]: I0127 12:14:03.072712 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:14:03 crc kubenswrapper[4745]: E0127 12:14:03.073236 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:14:03 crc kubenswrapper[4745]: I0127 12:14:03.072865 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:14:03 crc kubenswrapper[4745]: E0127 12:14:03.073305 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:14:03 crc kubenswrapper[4745]: E0127 12:14:03.073366 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:14:03 crc kubenswrapper[4745]: E0127 12:14:03.343283 4745 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 12:14:05 crc kubenswrapper[4745]: I0127 12:14:05.073709 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:14:05 crc kubenswrapper[4745]: I0127 12:14:05.073765 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:14:05 crc kubenswrapper[4745]: I0127 12:14:05.073865 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:14:05 crc kubenswrapper[4745]: E0127 12:14:05.073938 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:14:05 crc kubenswrapper[4745]: E0127 12:14:05.074075 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:14:05 crc kubenswrapper[4745]: E0127 12:14:05.074199 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:14:05 crc kubenswrapper[4745]: I0127 12:14:05.075003 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:14:05 crc kubenswrapper[4745]: E0127 12:14:05.075160 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:14:07 crc kubenswrapper[4745]: I0127 12:14:07.073617 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:14:07 crc kubenswrapper[4745]: I0127 12:14:07.073666 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:14:07 crc kubenswrapper[4745]: I0127 12:14:07.073723 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:14:07 crc kubenswrapper[4745]: E0127 12:14:07.073983 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 12:14:07 crc kubenswrapper[4745]: I0127 12:14:07.074039 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:14:07 crc kubenswrapper[4745]: E0127 12:14:07.074206 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 12:14:07 crc kubenswrapper[4745]: E0127 12:14:07.074259 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 12:14:07 crc kubenswrapper[4745]: E0127 12:14:07.074336 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-swntl" podUID="c1811fa8-9015-4fe0-8fad-2461d64cdffd" Jan 27 12:14:09 crc kubenswrapper[4745]: I0127 12:14:09.073247 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl" Jan 27 12:14:09 crc kubenswrapper[4745]: I0127 12:14:09.073258 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 12:14:09 crc kubenswrapper[4745]: I0127 12:14:09.073278 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 12:14:09 crc kubenswrapper[4745]: I0127 12:14:09.073431 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:14:09 crc kubenswrapper[4745]: I0127 12:14:09.076866 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 12:14:09 crc kubenswrapper[4745]: I0127 12:14:09.077244 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 12:14:09 crc kubenswrapper[4745]: I0127 12:14:09.077360 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 12:14:09 crc kubenswrapper[4745]: I0127 12:14:09.077573 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 12:14:09 crc kubenswrapper[4745]: I0127 12:14:09.079025 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 12:14:09 crc kubenswrapper[4745]: I0127 12:14:09.079204 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 12:14:10 crc kubenswrapper[4745]: I0127 12:14:10.504899 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.469348 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.554179 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tf24j"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.554751 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.555028 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.555083 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.562494 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.562802 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.563224 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.564721 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.566161 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.577643 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.578184 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.579807 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.595710 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.596137 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.596266 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.596292 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.596389 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xhn4h"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.596945 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xhn4h" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.599792 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.600714 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.600897 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-l2frr"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.601443 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.602932 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-hbsbc"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.603280 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hbsbc" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.603885 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.604437 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.607753 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.611078 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.611304 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.611429 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.611537 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.612306 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.614565 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.614793 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.614864 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-sssvv"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.624322 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7rjtn"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.624628 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lrlf2"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.625062 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-scl78"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.625476 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rxlbk"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.625861 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jqmqz"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.626233 4745 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jqmqz" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.626544 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lrlf2" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.626991 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.627269 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rxlbk" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.627562 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-scl78" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.615177 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.627768 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-sssvv" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.629520 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.622119 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.629689 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.629802 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.622386 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.622411 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.622454 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.622470 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.622550 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.622619 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.622677 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.622721 4745 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.622751 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.622782 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.622855 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.622943 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.622964 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.623038 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.623053 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.631630 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.631667 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.631712 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.632215 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.632779 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.632864 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.632997 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.633039 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.633135 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.633284 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.633325 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 
Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.633999 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.631765 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hv45d"]
Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.638754 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-zqrwf"]
Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.639100 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ndbtg"]
Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.639466 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg"
Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.641469 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hv45d"
Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.641592 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zqrwf"
Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.647480 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.648009 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.650335 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-s4vnk"]
Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.651907 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4hbbw"]
Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.661609 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s4vnk"
Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.669400 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.670000 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.670351 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.670873 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.671336 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.671612 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.671799 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.693125 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.696310 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.697289 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.697508 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.697708 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.698263 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86448"]
Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.698642 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.698688 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5tpbz"]
Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.700104 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4hbbw"
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4hbbw" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.700337 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-44s8w"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.700524 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.700559 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86448" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.700712 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5tpbz" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.701122 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cef45995-4242-499f-adeb-cc12aa630b5c-encryption-config\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.701165 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2-serving-cert\") pod \"openshift-config-operator-7777fb866f-l2frr\" (UID: \"65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.701194 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cef45995-4242-499f-adeb-cc12aa630b5c-audit-dir\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.701219 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw9zf\" (UniqueName: \"kubernetes.io/projected/cef45995-4242-499f-adeb-cc12aa630b5c-kube-api-access-lw9zf\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.701239 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46ec327c-832f-4a20-9b99-1aa3315c312f-config\") pod \"route-controller-manager-6576b87f9c-8nsr4\" (UID: \"46ec327c-832f-4a20-9b99-1aa3315c312f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.701264 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzzrx\" (UniqueName: \"kubernetes.io/projected/478908d6-765e-4bd8-a3ef-3142a7641a3b-kube-api-access-gzzrx\") pod \"downloads-7954f5f757-hbsbc\" (UID: \"478908d6-765e-4bd8-a3ef-3142a7641a3b\") " pod="openshift-console/downloads-7954f5f757-hbsbc" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.701285 4745 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78fe56b7-5ff3-4540-bfda-efeef43859f6-audit-dir\") pod \"apiserver-7bbb656c7d-qt59f\" (UID: \"78fe56b7-5ff3-4540-bfda-efeef43859f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.701308 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46ec327c-832f-4a20-9b99-1aa3315c312f-client-ca\") pod \"route-controller-manager-6576b87f9c-8nsr4\" (UID: \"46ec327c-832f-4a20-9b99-1aa3315c312f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.701341 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85z5f\" (UniqueName: \"kubernetes.io/projected/65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2-kube-api-access-85z5f\") pod \"openshift-config-operator-7777fb866f-l2frr\" (UID: \"65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.701364 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cef45995-4242-499f-adeb-cc12aa630b5c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.701385 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea5a99ae-b999-419c-9da0-1333ba6378ea-config\") pod \"console-operator-58897d9998-xhn4h\" (UID: \"ea5a99ae-b999-419c-9da0-1333ba6378ea\") " pod="openshift-console-operator/console-operator-58897d9998-xhn4h" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.701404 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6swl\" (UniqueName: \"kubernetes.io/projected/46ec327c-832f-4a20-9b99-1aa3315c312f-kube-api-access-c6swl\") pod \"route-controller-manager-6576b87f9c-8nsr4\" (UID: \"46ec327c-832f-4a20-9b99-1aa3315c312f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.701422 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cef45995-4242-499f-adeb-cc12aa630b5c-etcd-serving-ca\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.701542 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cef45995-4242-499f-adeb-cc12aa630b5c-image-import-ca\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.701543 4745 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-etcd-operator/etcd-operator-b45778765-zcwfv"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.701597 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44s8w" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.701683 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cef45995-4242-499f-adeb-cc12aa630b5c-etcd-client\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.701732 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cef45995-4242-499f-adeb-cc12aa630b5c-serving-cert\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.701752 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78fe56b7-5ff3-4540-bfda-efeef43859f6-serving-cert\") pod \"apiserver-7bbb656c7d-qt59f\" (UID: \"78fe56b7-5ff3-4540-bfda-efeef43859f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.701862 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/78fe56b7-5ff3-4540-bfda-efeef43859f6-encryption-config\") pod \"apiserver-7bbb656c7d-qt59f\" (UID: \"78fe56b7-5ff3-4540-bfda-efeef43859f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.701886 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea5a99ae-b999-419c-9da0-1333ba6378ea-trusted-ca\") pod \"console-operator-58897d9998-xhn4h\" (UID: \"ea5a99ae-b999-419c-9da0-1333ba6378ea\") " pod="openshift-console-operator/console-operator-58897d9998-xhn4h" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.701907 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-l2frr\" (UID: \"65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.701933 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/78fe56b7-5ff3-4540-bfda-efeef43859f6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qt59f\" (UID: \"78fe56b7-5ff3-4540-bfda-efeef43859f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.701963 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cef45995-4242-499f-adeb-cc12aa630b5c-audit\") pod \"apiserver-76f77b778f-tf24j\" (UID: 
\"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.702046 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.702065 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mq9d\" (UniqueName: \"kubernetes.io/projected/ea5a99ae-b999-419c-9da0-1333ba6378ea-kube-api-access-2mq9d\") pod \"console-operator-58897d9998-xhn4h\" (UID: \"ea5a99ae-b999-419c-9da0-1333ba6378ea\") " pod="openshift-console-operator/console-operator-58897d9998-xhn4h" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.702109 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cef45995-4242-499f-adeb-cc12aa630b5c-node-pullsecrets\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.702129 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/78fe56b7-5ff3-4540-bfda-efeef43859f6-etcd-client\") pod \"apiserver-7bbb656c7d-qt59f\" (UID: \"78fe56b7-5ff3-4540-bfda-efeef43859f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.702149 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78fe56b7-5ff3-4540-bfda-efeef43859f6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qt59f\" (UID: \"78fe56b7-5ff3-4540-bfda-efeef43859f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.702170 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46ec327c-832f-4a20-9b99-1aa3315c312f-serving-cert\") pod \"route-controller-manager-6576b87f9c-8nsr4\" (UID: \"46ec327c-832f-4a20-9b99-1aa3315c312f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.702248 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea5a99ae-b999-419c-9da0-1333ba6378ea-serving-cert\") pod \"console-operator-58897d9998-xhn4h\" (UID: \"ea5a99ae-b999-419c-9da0-1333ba6378ea\") " pod="openshift-console-operator/console-operator-58897d9998-xhn4h" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.702268 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cef45995-4242-499f-adeb-cc12aa630b5c-config\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.702307 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/78fe56b7-5ff3-4540-bfda-efeef43859f6-audit-policies\") pod 
\"apiserver-7bbb656c7d-qt59f\" (UID: \"78fe56b7-5ff3-4540-bfda-efeef43859f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.702529 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ckdj\" (UniqueName: \"kubernetes.io/projected/78fe56b7-5ff3-4540-bfda-efeef43859f6-kube-api-access-5ckdj\") pod \"apiserver-7bbb656c7d-qt59f\" (UID: \"78fe56b7-5ff3-4540-bfda-efeef43859f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.702255 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.702334 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.703604 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.706663 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.706990 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.707211 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.710669 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.713693 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.713960 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.714064 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.714117 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.714115 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.714443 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.716681 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6wlnl"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.717041 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-skmp4"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.717319 4745 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-multus/multus-admission-controller-857f4d67dd-r79xk"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.717712 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-skmp4" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.718319 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-85x5w"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.718404 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zcwfv" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.718579 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-r79xk" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.718671 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6wlnl" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.719762 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bgpwd"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.720285 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-5mbhc"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.720357 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bgpwd" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.720369 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85x5w" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.720895 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-5mbhc" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.726181 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.726548 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.726800 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.727801 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p58fj"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.728499 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.773500 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.782655 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.783109 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.785631 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.787897 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.785663 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.788538 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.793911 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.793914 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.793963 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hhfbt"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.794386 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p58fj" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.794606 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jfk2x"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.794676 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.794977 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.795016 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.794982 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-gkffs"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.795069 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jfk2x" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.795793 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ks2hk"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.795859 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.795889 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gkffs" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.795945 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.796065 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.796180 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.796279 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.796383 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.796489 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.796983 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ks2hk" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.797540 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.798409 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.798568 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqb7h"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.799122 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-l8dg8"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.799513 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8dg8" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.799723 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqb7h" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.800773 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5mbc7"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.802199 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5mbc7" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.803237 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mq9d\" (UniqueName: \"kubernetes.io/projected/ea5a99ae-b999-419c-9da0-1333ba6378ea-kube-api-access-2mq9d\") pod \"console-operator-58897d9998-xhn4h\" (UID: \"ea5a99ae-b999-419c-9da0-1333ba6378ea\") " pod="openshift-console-operator/console-operator-58897d9998-xhn4h" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.803517 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cef45995-4242-499f-adeb-cc12aa630b5c-node-pullsecrets\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.803542 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/78fe56b7-5ff3-4540-bfda-efeef43859f6-etcd-client\") pod \"apiserver-7bbb656c7d-qt59f\" (UID: \"78fe56b7-5ff3-4540-bfda-efeef43859f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.803563 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78fe56b7-5ff3-4540-bfda-efeef43859f6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qt59f\" (UID: \"78fe56b7-5ff3-4540-bfda-efeef43859f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.803593 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.803618 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.803642 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46ec327c-832f-4a20-9b99-1aa3315c312f-serving-cert\") pod \"route-controller-manager-6576b87f9c-8nsr4\" (UID: \"46ec327c-832f-4a20-9b99-1aa3315c312f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.803660 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b18e2d30-da7f-4c5f-9700-20c7c05b1043-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rxlbk\" (UID: \"b18e2d30-da7f-4c5f-9700-20c7c05b1043\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rxlbk" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.803679 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8d58e84-2299-4ceb-bb86-e5e7a451b3bc-bound-sa-token\") pod \"ingress-operator-5b745b69d9-44s8w\" (UID: \"c8d58e84-2299-4ceb-bb86-e5e7a451b3bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44s8w" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.803702 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfg7d\" (UniqueName: \"kubernetes.io/projected/26625d33-dbca-4e3f-97eb-34956096bf8a-kube-api-access-cfg7d\") pod \"openshift-apiserver-operator-796bbdcf4f-hv45d\" (UID: \"26625d33-dbca-4e3f-97eb-34956096bf8a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hv45d" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.803720 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f880472d-b13f-4b62-946f-3d74aafe5743-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4hbbw\" (UID: \"f880472d-b13f-4b62-946f-3d74aafe5743\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4hbbw" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.803742 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea5a99ae-b999-419c-9da0-1333ba6378ea-serving-cert\") pod \"console-operator-58897d9998-xhn4h\" (UID: \"ea5a99ae-b999-419c-9da0-1333ba6378ea\") " pod="openshift-console-operator/console-operator-58897d9998-xhn4h" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.803765 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cef45995-4242-499f-adeb-cc12aa630b5c-config\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.803782 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/78fe56b7-5ff3-4540-bfda-efeef43859f6-audit-policies\") pod \"apiserver-7bbb656c7d-qt59f\" (UID: \"78fe56b7-5ff3-4540-bfda-efeef43859f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.803799 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ckdj\" (UniqueName: \"kubernetes.io/projected/78fe56b7-5ff3-4540-bfda-efeef43859f6-kube-api-access-5ckdj\") pod \"apiserver-7bbb656c7d-qt59f\" (UID: \"78fe56b7-5ff3-4540-bfda-efeef43859f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.803801 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.803831 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.803862 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59d3b2e2-c186-4551-b6b6-962b13b3a058-config\") pod \"kube-controller-manager-operator-78b949d7b-5tpbz\" (UID: \"59d3b2e2-c186-4551-b6b6-962b13b3a058\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5tpbz" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.803883 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/32a5e4ed-16fd-4922-ac7f-515ea14b4fe5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-jqmqz\" (UID: \"32a5e4ed-16fd-4922-ac7f-515ea14b4fe5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jqmqz" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.803902 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2944\" (UniqueName: \"kubernetes.io/projected/f880472d-b13f-4b62-946f-3d74aafe5743-kube-api-access-g2944\") pod \"machine-api-operator-5694c8668f-4hbbw\" (UID: \"f880472d-b13f-4b62-946f-3d74aafe5743\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4hbbw" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.803921 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/76633ed6-00c1-4c35-aa9c-93c0867d676d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-lrlf2\" (UID: \"76633ed6-00c1-4c35-aa9c-93c0867d676d\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lrlf2" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.803942 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8481d31f-f701-4821-9893-5ebf45d2dcb8-audit-dir\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.803960 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/94eb6425-bdf2-43d1-926e-c94700a985be-console-oauth-config\") pod \"console-f9d7485db-zqrwf\" (UID: \"94eb6425-bdf2-43d1-926e-c94700a985be\") " pod="openshift-console/console-f9d7485db-zqrwf" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.803979 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f880472d-b13f-4b62-946f-3d74aafe5743-images\") pod \"machine-api-operator-5694c8668f-4hbbw\" (UID: \"f880472d-b13f-4b62-946f-3d74aafe5743\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4hbbw" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.803999 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cef45995-4242-499f-adeb-cc12aa630b5c-encryption-config\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804018 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ada46f99-5088-4a53-b7b6-cc0d93f72412-config\") pod \"authentication-operator-69f744f599-sssvv\" (UID: \"ada46f99-5088-4a53-b7b6-cc0d93f72412\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sssvv" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804036 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2-serving-cert\") pod \"openshift-config-operator-7777fb866f-l2frr\" (UID: \"65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804053 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ada46f99-5088-4a53-b7b6-cc0d93f72412-service-ca-bundle\") pod \"authentication-operator-69f744f599-sssvv\" (UID: \"ada46f99-5088-4a53-b7b6-cc0d93f72412\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sssvv" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804070 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cef45995-4242-499f-adeb-cc12aa630b5c-audit-dir\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804090 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/94eb6425-bdf2-43d1-926e-c94700a985be-console-config\") pod \"console-f9d7485db-zqrwf\" (UID: \"94eb6425-bdf2-43d1-926e-c94700a985be\") " pod="openshift-console/console-f9d7485db-zqrwf" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804109 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59d3b2e2-c186-4551-b6b6-962b13b3a058-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5tpbz\" (UID: \"59d3b2e2-c186-4551-b6b6-962b13b3a058\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5tpbz" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804128 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw9zf\" (UniqueName: \"kubernetes.io/projected/cef45995-4242-499f-adeb-cc12aa630b5c-kube-api-access-lw9zf\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804147 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsrkr\" (UniqueName: \"kubernetes.io/projected/c8d58e84-2299-4ceb-bb86-e5e7a451b3bc-kube-api-access-qsrkr\") pod \"ingress-operator-5b745b69d9-44s8w\" (UID: \"c8d58e84-2299-4ceb-bb86-e5e7a451b3bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44s8w" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804168 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4hmn\" (UniqueName: \"kubernetes.io/projected/94eb6425-bdf2-43d1-926e-c94700a985be-kube-api-access-v4hmn\") pod \"console-f9d7485db-zqrwf\" (UID: \"94eb6425-bdf2-43d1-926e-c94700a985be\") " pod="openshift-console/console-f9d7485db-zqrwf" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804188 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46ec327c-832f-4a20-9b99-1aa3315c312f-config\") pod \"route-controller-manager-6576b87f9c-8nsr4\" (UID: \"46ec327c-832f-4a20-9b99-1aa3315c312f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804206 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzzrx\" (UniqueName: \"kubernetes.io/projected/478908d6-765e-4bd8-a3ef-3142a7641a3b-kube-api-access-gzzrx\") pod \"downloads-7954f5f757-hbsbc\" (UID: \"478908d6-765e-4bd8-a3ef-3142a7641a3b\") " pod="openshift-console/downloads-7954f5f757-hbsbc" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804227 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8scc\" (UniqueName: \"kubernetes.io/projected/b18e2d30-da7f-4c5f-9700-20c7c05b1043-kube-api-access-z8scc\") pod \"openshift-controller-manager-operator-756b6f6bc6-rxlbk\" (UID: \"b18e2d30-da7f-4c5f-9700-20c7c05b1043\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rxlbk" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804248 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-ldshw\" (UniqueName: \"kubernetes.io/projected/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-kube-api-access-ldshw\") pod \"controller-manager-879f6c89f-ndbtg\" (UID: \"1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804266 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78fe56b7-5ff3-4540-bfda-efeef43859f6-audit-dir\") pod \"apiserver-7bbb656c7d-qt59f\" (UID: \"78fe56b7-5ff3-4540-bfda-efeef43859f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804285 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-client-ca\") pod \"controller-manager-879f6c89f-ndbtg\" (UID: \"1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804303 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46ec327c-832f-4a20-9b99-1aa3315c312f-client-ca\") pod \"route-controller-manager-6576b87f9c-8nsr4\" (UID: \"46ec327c-832f-4a20-9b99-1aa3315c312f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804323 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8481d31f-f701-4821-9893-5ebf45d2dcb8-audit-policies\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804343 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/32a5e4ed-16fd-4922-ac7f-515ea14b4fe5-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-jqmqz\" (UID: \"32a5e4ed-16fd-4922-ac7f-515ea14b4fe5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jqmqz" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804366 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/94eb6425-bdf2-43d1-926e-c94700a985be-service-ca\") pod \"console-f9d7485db-zqrwf\" (UID: \"94eb6425-bdf2-43d1-926e-c94700a985be\") " pod="openshift-console/console-f9d7485db-zqrwf" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804412 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m72hb\" (UniqueName: \"kubernetes.io/projected/ada46f99-5088-4a53-b7b6-cc0d93f72412-kube-api-access-m72hb\") pod \"authentication-operator-69f744f599-sssvv\" (UID: \"ada46f99-5088-4a53-b7b6-cc0d93f72412\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sssvv" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804436 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/26625d33-dbca-4e3f-97eb-34956096bf8a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hv45d\" (UID: \"26625d33-dbca-4e3f-97eb-34956096bf8a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hv45d" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804455 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f880472d-b13f-4b62-946f-3d74aafe5743-config\") pod \"machine-api-operator-5694c8668f-4hbbw\" (UID: \"f880472d-b13f-4b62-946f-3d74aafe5743\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4hbbw" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804474 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a4a46bed-8781-4e46-a70e-868c24144a1f-machine-approver-tls\") pod \"machine-approver-56656f9798-s4vnk\" (UID: \"a4a46bed-8781-4e46-a70e-868c24144a1f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s4vnk" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804501 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b18e2d30-da7f-4c5f-9700-20c7c05b1043-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rxlbk\" (UID: \"b18e2d30-da7f-4c5f-9700-20c7c05b1043\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rxlbk" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804520 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804538 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/792e272a-64cd-47cd-8aac-eeb295e49f05-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-86448\" (UID: \"792e272a-64cd-47cd-8aac-eeb295e49f05\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86448" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804558 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz9tx\" (UniqueName: \"kubernetes.io/projected/2f032c11-b3c4-45f7-be15-d5873624adcd-kube-api-access-fz9tx\") pod \"dns-operator-744455d44c-scl78\" (UID: \"2f032c11-b3c4-45f7-be15-d5873624adcd\") " pod="openshift-dns-operator/dns-operator-744455d44c-scl78" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804578 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85z5f\" (UniqueName: \"kubernetes.io/projected/65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2-kube-api-access-85z5f\") pod \"openshift-config-operator-7777fb866f-l2frr\" (UID: \"65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804599 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cef45995-4242-499f-adeb-cc12aa630b5c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804619 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea5a99ae-b999-419c-9da0-1333ba6378ea-config\") pod \"console-operator-58897d9998-xhn4h\" (UID: \"ea5a99ae-b999-419c-9da0-1333ba6378ea\") " pod="openshift-console-operator/console-operator-58897d9998-xhn4h" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804637 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26625d33-dbca-4e3f-97eb-34956096bf8a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-hv45d\" (UID: \"26625d33-dbca-4e3f-97eb-34956096bf8a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hv45d" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804654 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/792e272a-64cd-47cd-8aac-eeb295e49f05-config\") pod \"kube-apiserver-operator-766d6c64bb-86448\" (UID: \"792e272a-64cd-47cd-8aac-eeb295e49f05\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86448" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804735 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cef45995-4242-499f-adeb-cc12aa630b5c-etcd-serving-ca\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804755 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-config\") pod \"controller-manager-879f6c89f-ndbtg\" (UID: \"1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804829 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6swl\" (UniqueName: \"kubernetes.io/projected/46ec327c-832f-4a20-9b99-1aa3315c312f-kube-api-access-c6swl\") pod \"route-controller-manager-6576b87f9c-8nsr4\" (UID: \"46ec327c-832f-4a20-9b99-1aa3315c312f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804860 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94eb6425-bdf2-43d1-926e-c94700a985be-trusted-ca-bundle\") pod \"console-f9d7485db-zqrwf\" (UID: \"94eb6425-bdf2-43d1-926e-c94700a985be\") " pod="openshift-console/console-f9d7485db-zqrwf" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804883 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cef45995-4242-499f-adeb-cc12aa630b5c-image-import-ca\") pod 
\"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804904 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804924 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804944 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/94eb6425-bdf2-43d1-926e-c94700a985be-console-serving-cert\") pod \"console-f9d7485db-zqrwf\" (UID: \"94eb6425-bdf2-43d1-926e-c94700a985be\") " pod="openshift-console/console-f9d7485db-zqrwf" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804962 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsvtv\" (UniqueName: \"kubernetes.io/projected/32a5e4ed-16fd-4922-ac7f-515ea14b4fe5-kube-api-access-gsvtv\") pod \"cluster-image-registry-operator-dc59b4c8b-jqmqz\" (UID: \"32a5e4ed-16fd-4922-ac7f-515ea14b4fe5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jqmqz" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.804984 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.805001 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a4a46bed-8781-4e46-a70e-868c24144a1f-auth-proxy-config\") pod \"machine-approver-56656f9798-s4vnk\" (UID: \"a4a46bed-8781-4e46-a70e-868c24144a1f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s4vnk" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.805021 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht7zf\" (UniqueName: \"kubernetes.io/projected/a4a46bed-8781-4e46-a70e-868c24144a1f-kube-api-access-ht7zf\") pod \"machine-approver-56656f9798-s4vnk\" (UID: \"a4a46bed-8781-4e46-a70e-868c24144a1f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s4vnk" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.805043 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/cef45995-4242-499f-adeb-cc12aa630b5c-etcd-client\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.805062 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.805082 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5c25\" (UniqueName: \"kubernetes.io/projected/76633ed6-00c1-4c35-aa9c-93c0867d676d-kube-api-access-z5c25\") pod \"cluster-samples-operator-665b6dd947-lrlf2\" (UID: \"76633ed6-00c1-4c35-aa9c-93c0867d676d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lrlf2" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.805098 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59d3b2e2-c186-4551-b6b6-962b13b3a058-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5tpbz\" (UID: \"59d3b2e2-c186-4551-b6b6-962b13b3a058\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5tpbz" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.805113 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/792e272a-64cd-47cd-8aac-eeb295e49f05-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-86448\" (UID: \"792e272a-64cd-47cd-8aac-eeb295e49f05\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86448" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.805151 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32a5e4ed-16fd-4922-ac7f-515ea14b4fe5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-jqmqz\" (UID: \"32a5e4ed-16fd-4922-ac7f-515ea14b4fe5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jqmqz" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.805168 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4a46bed-8781-4e46-a70e-868c24144a1f-config\") pod \"machine-approver-56656f9798-s4vnk\" (UID: \"a4a46bed-8781-4e46-a70e-868c24144a1f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s4vnk" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.805187 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.805208 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ada46f99-5088-4a53-b7b6-cc0d93f72412-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-sssvv\" (UID: \"ada46f99-5088-4a53-b7b6-cc0d93f72412\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sssvv" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.805227 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea5a99ae-b999-419c-9da0-1333ba6378ea-trusted-ca\") pod \"console-operator-58897d9998-xhn4h\" (UID: \"ea5a99ae-b999-419c-9da0-1333ba6378ea\") " pod="openshift-console-operator/console-operator-58897d9998-xhn4h" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.805243 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cef45995-4242-499f-adeb-cc12aa630b5c-serving-cert\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.805265 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78fe56b7-5ff3-4540-bfda-efeef43859f6-serving-cert\") pod \"apiserver-7bbb656c7d-qt59f\" (UID: \"78fe56b7-5ff3-4540-bfda-efeef43859f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.805281 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/78fe56b7-5ff3-4540-bfda-efeef43859f6-encryption-config\") pod \"apiserver-7bbb656c7d-qt59f\" (UID: \"78fe56b7-5ff3-4540-bfda-efeef43859f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.805299 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.805322 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ndbtg\" (UID: \"1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.805341 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f032c11-b3c4-45f7-be15-d5873624adcd-metrics-tls\") pod \"dns-operator-744455d44c-scl78\" (UID: \"2f032c11-b3c4-45f7-be15-d5873624adcd\") " pod="openshift-dns-operator/dns-operator-744455d44c-scl78" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.805360 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2-available-featuregates\") 
pod \"openshift-config-operator-7777fb866f-l2frr\" (UID: \"65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.805379 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/78fe56b7-5ff3-4540-bfda-efeef43859f6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qt59f\" (UID: \"78fe56b7-5ff3-4540-bfda-efeef43859f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.805396 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8d58e84-2299-4ceb-bb86-e5e7a451b3bc-trusted-ca\") pod \"ingress-operator-5b745b69d9-44s8w\" (UID: \"c8d58e84-2299-4ceb-bb86-e5e7a451b3bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44s8w" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.805414 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-serving-cert\") pod \"controller-manager-879f6c89f-ndbtg\" (UID: \"1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.805434 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c8d58e84-2299-4ceb-bb86-e5e7a451b3bc-metrics-tls\") pod \"ingress-operator-5b745b69d9-44s8w\" (UID: \"c8d58e84-2299-4ceb-bb86-e5e7a451b3bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44s8w" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.805452 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-675gh\" (UniqueName: \"kubernetes.io/projected/8481d31f-f701-4821-9893-5ebf45d2dcb8-kube-api-access-675gh\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.805469 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cef45995-4242-499f-adeb-cc12aa630b5c-audit\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.805489 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.805510 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/94eb6425-bdf2-43d1-926e-c94700a985be-oauth-serving-cert\") pod \"console-f9d7485db-zqrwf\" (UID: \"94eb6425-bdf2-43d1-926e-c94700a985be\") " 
pod="openshift-console/console-f9d7485db-zqrwf" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.805527 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ada46f99-5088-4a53-b7b6-cc0d93f72412-serving-cert\") pod \"authentication-operator-69f744f599-sssvv\" (UID: \"ada46f99-5088-4a53-b7b6-cc0d93f72412\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sssvv" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.805887 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491920-s88fm"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.806031 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cef45995-4242-499f-adeb-cc12aa630b5c-node-pullsecrets\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.806609 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7xjzm"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.806629 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78fe56b7-5ff3-4540-bfda-efeef43859f6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qt59f\" (UID: \"78fe56b7-5ff3-4540-bfda-efeef43859f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.806956 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cef45995-4242-499f-adeb-cc12aa630b5c-config\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.807675 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea5a99ae-b999-419c-9da0-1333ba6378ea-config\") pod \"console-operator-58897d9998-xhn4h\" (UID: \"ea5a99ae-b999-419c-9da0-1333ba6378ea\") " pod="openshift-console-operator/console-operator-58897d9998-xhn4h" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.808546 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cef45995-4242-499f-adeb-cc12aa630b5c-etcd-serving-ca\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.809370 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cef45995-4242-499f-adeb-cc12aa630b5c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.809503 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cef45995-4242-499f-adeb-cc12aa630b5c-image-import-ca\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " 
pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.809969 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/78fe56b7-5ff3-4540-bfda-efeef43859f6-audit-policies\") pod \"apiserver-7bbb656c7d-qt59f\" (UID: \"78fe56b7-5ff3-4540-bfda-efeef43859f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.810027 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/78fe56b7-5ff3-4540-bfda-efeef43859f6-etcd-client\") pod \"apiserver-7bbb656c7d-qt59f\" (UID: \"78fe56b7-5ff3-4540-bfda-efeef43859f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.810622 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491920-s88fm" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.813640 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/78fe56b7-5ff3-4540-bfda-efeef43859f6-encryption-config\") pod \"apiserver-7bbb656c7d-qt59f\" (UID: \"78fe56b7-5ff3-4540-bfda-efeef43859f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.814460 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cef45995-4242-499f-adeb-cc12aa630b5c-encryption-config\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.815538 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea5a99ae-b999-419c-9da0-1333ba6378ea-trusted-ca\") pod \"console-operator-58897d9998-xhn4h\" (UID: \"ea5a99ae-b999-419c-9da0-1333ba6378ea\") " pod="openshift-console-operator/console-operator-58897d9998-xhn4h" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.816098 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.816152 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tf24j"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.816166 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4blhs"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.816583 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46ec327c-832f-4a20-9b99-1aa3315c312f-serving-cert\") pod \"route-controller-manager-6576b87f9c-8nsr4\" (UID: \"46ec327c-832f-4a20-9b99-1aa3315c312f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.816876 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4blhs" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.817071 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-l2frr\" (UID: \"65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.817668 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/78fe56b7-5ff3-4540-bfda-efeef43859f6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qt59f\" (UID: \"78fe56b7-5ff3-4540-bfda-efeef43859f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.817788 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2-serving-cert\") pod \"openshift-config-operator-7777fb866f-l2frr\" (UID: \"65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.817797 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cef45995-4242-499f-adeb-cc12aa630b5c-audit-dir\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.819211 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46ec327c-832f-4a20-9b99-1aa3315c312f-client-ca\") pod \"route-controller-manager-6576b87f9c-8nsr4\" (UID: \"46ec327c-832f-4a20-9b99-1aa3315c312f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.819278 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cef45995-4242-499f-adeb-cc12aa630b5c-audit\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.819647 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cef45995-4242-499f-adeb-cc12aa630b5c-etcd-client\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.819692 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78fe56b7-5ff3-4540-bfda-efeef43859f6-audit-dir\") pod \"apiserver-7bbb656c7d-qt59f\" (UID: \"78fe56b7-5ff3-4540-bfda-efeef43859f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.822431 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7xjzm" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.822521 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-lwzbq"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.823409 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78fe56b7-5ff3-4540-bfda-efeef43859f6-serving-cert\") pod \"apiserver-7bbb656c7d-qt59f\" (UID: \"78fe56b7-5ff3-4540-bfda-efeef43859f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.823653 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.825195 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xhn4h"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.825221 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hbsbc"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.825323 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-lwzbq" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.826303 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cef45995-4242-499f-adeb-cc12aa630b5c-serving-cert\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.826458 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rxlbk"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.827549 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-sssvv"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.832941 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hv45d"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.832993 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lrlf2"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.831321 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea5a99ae-b999-419c-9da0-1333ba6378ea-serving-cert\") pod \"console-operator-58897d9998-xhn4h\" (UID: \"ea5a99ae-b999-419c-9da0-1333ba6378ea\") " pod="openshift-console-operator/console-operator-58897d9998-xhn4h" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.834841 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-l2frr"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.834896 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p58fj"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.835345 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5tpbz"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.837006 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.837198 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zqrwf"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.837805 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7rjtn"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.843428 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6wlnl"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.844883 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4blhs"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.846171 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-44s8w"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.847747 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-skmp4"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.848903 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.850009 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ndbtg"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.851103 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86448"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.853165 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zcwfv"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.855384 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jqmqz"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.857554 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46ec327c-832f-4a20-9b99-1aa3315c312f-config\") pod \"route-controller-manager-6576b87f9c-8nsr4\" (UID: \"46ec327c-832f-4a20-9b99-1aa3315c312f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.858032 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bgpwd"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.858090 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-85x5w"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.859286 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-l8dg8"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.860028 4745 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.860467 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-scl78"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.861948 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ks2hk"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.863732 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hhfbt"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.864558 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-njxtf"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.870950 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-njxtf" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.870766 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-v7vzv"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.875601 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-v7vzv" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.881076 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.882344 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4hbbw"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.884917 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqb7h"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.888239 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-r79xk"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.890597 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5mbc7"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.892430 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lwzbq"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.894507 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jfk2x"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.895932 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-gkffs"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.897675 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.898099 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7xjzm"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.900713 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-v7vzv"] Jan 27 12:14:14 crc 
kubenswrapper[4745]: I0127 12:14:14.902429 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491920-s88fm"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.902910 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-9bws5"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.904640 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9bws5"] Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.904762 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9bws5" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.907080 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ndbtg\" (UID: \"1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.907115 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f032c11-b3c4-45f7-be15-d5873624adcd-metrics-tls\") pod \"dns-operator-744455d44c-scl78\" (UID: \"2f032c11-b3c4-45f7-be15-d5873624adcd\") " pod="openshift-dns-operator/dns-operator-744455d44c-scl78" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.907137 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8d58e84-2299-4ceb-bb86-e5e7a451b3bc-trusted-ca\") pod \"ingress-operator-5b745b69d9-44s8w\" (UID: \"c8d58e84-2299-4ceb-bb86-e5e7a451b3bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44s8w" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.907155 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ada46f99-5088-4a53-b7b6-cc0d93f72412-serving-cert\") pod \"authentication-operator-69f744f599-sssvv\" (UID: \"ada46f99-5088-4a53-b7b6-cc0d93f72412\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sssvv" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.907183 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcwpk\" (UniqueName: \"kubernetes.io/projected/58f20271-d6bc-42dc-8932-fe80286fecd1-kube-api-access-xcwpk\") pod \"service-ca-9c57cc56f-4blhs\" (UID: \"58f20271-d6bc-42dc-8932-fe80286fecd1\") " pod="openshift-service-ca/service-ca-9c57cc56f-4blhs" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.907509 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.907551 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: 
\"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.907578 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8d58e84-2299-4ceb-bb86-e5e7a451b3bc-bound-sa-token\") pod \"ingress-operator-5b745b69d9-44s8w\" (UID: \"c8d58e84-2299-4ceb-bb86-e5e7a451b3bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44s8w" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.907621 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59d3b2e2-c186-4551-b6b6-962b13b3a058-config\") pod \"kube-controller-manager-operator-78b949d7b-5tpbz\" (UID: \"59d3b2e2-c186-4551-b6b6-962b13b3a058\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5tpbz" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.907649 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8481d31f-f701-4821-9893-5ebf45d2dcb8-audit-dir\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.907669 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f880472d-b13f-4b62-946f-3d74aafe5743-images\") pod \"machine-api-operator-5694c8668f-4hbbw\" (UID: \"f880472d-b13f-4b62-946f-3d74aafe5743\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4hbbw" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.907695 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/199bb9ad-0a44-4631-995f-c4ef6809cd54-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-r79xk\" (UID: \"199bb9ad-0a44-4631-995f-c4ef6809cd54\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-r79xk" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.907723 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dddac23a-b546-4121-85df-7475aa7c5801-config\") pod \"etcd-operator-b45778765-zcwfv\" (UID: \"dddac23a-b546-4121-85df-7475aa7c5801\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zcwfv" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.907751 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dddac23a-b546-4121-85df-7475aa7c5801-etcd-client\") pod \"etcd-operator-b45778765-zcwfv\" (UID: \"dddac23a-b546-4121-85df-7475aa7c5801\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zcwfv" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.907778 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kgsk\" (UniqueName: \"kubernetes.io/projected/6086ad74-5d02-4181-bb34-8c116409de42-kube-api-access-6kgsk\") pod \"collect-profiles-29491920-s88fm\" (UID: \"6086ad74-5d02-4181-bb34-8c116409de42\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491920-s88fm" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.907828 
4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b5c47589-d94b-44fb-b31f-1f4045ea9e3c-images\") pod \"machine-config-operator-74547568cd-85x5w\" (UID: \"b5c47589-d94b-44fb-b31f-1f4045ea9e3c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85x5w" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.907859 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldshw\" (UniqueName: \"kubernetes.io/projected/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-kube-api-access-ldshw\") pod \"controller-manager-879f6c89f-ndbtg\" (UID: \"1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.907891 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-client-ca\") pod \"controller-manager-879f6c89f-ndbtg\" (UID: \"1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908112 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkn95\" (UniqueName: \"kubernetes.io/projected/199bb9ad-0a44-4631-995f-c4ef6809cd54-kube-api-access-xkn95\") pod \"multus-admission-controller-857f4d67dd-r79xk\" (UID: \"199bb9ad-0a44-4631-995f-c4ef6809cd54\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-r79xk" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908152 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkhsp\" (UniqueName: \"kubernetes.io/projected/97d3d9df-e52f-4eb3-8034-c5ace5c23da3-kube-api-access-rkhsp\") pod \"packageserver-d55dfcdfc-7xjzm\" (UID: \"97d3d9df-e52f-4eb3-8034-c5ace5c23da3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7xjzm" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908179 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/dddac23a-b546-4121-85df-7475aa7c5801-etcd-ca\") pod \"etcd-operator-b45778765-zcwfv\" (UID: \"dddac23a-b546-4121-85df-7475aa7c5801\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zcwfv" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908197 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8481d31f-f701-4821-9893-5ebf45d2dcb8-audit-dir\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908205 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8481d31f-f701-4821-9893-5ebf45d2dcb8-audit-policies\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908239 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/32a5e4ed-16fd-4922-ac7f-515ea14b4fe5-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-jqmqz\" (UID: \"32a5e4ed-16fd-4922-ac7f-515ea14b4fe5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jqmqz" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908262 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5cd78bf5-69f3-4074-9dea-c7a459de6d4d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jfk2x\" (UID: \"5cd78bf5-69f3-4074-9dea-c7a459de6d4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfk2x" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908292 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/94eb6425-bdf2-43d1-926e-c94700a985be-service-ca\") pod \"console-f9d7485db-zqrwf\" (UID: \"94eb6425-bdf2-43d1-926e-c94700a985be\") " pod="openshift-console/console-f9d7485db-zqrwf" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908308 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m72hb\" (UniqueName: \"kubernetes.io/projected/ada46f99-5088-4a53-b7b6-cc0d93f72412-kube-api-access-m72hb\") pod \"authentication-operator-69f744f599-sssvv\" (UID: \"ada46f99-5088-4a53-b7b6-cc0d93f72412\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sssvv" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908325 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f880472d-b13f-4b62-946f-3d74aafe5743-config\") pod \"machine-api-operator-5694c8668f-4hbbw\" (UID: \"f880472d-b13f-4b62-946f-3d74aafe5743\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4hbbw" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908343 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a4a46bed-8781-4e46-a70e-868c24144a1f-machine-approver-tls\") pod \"machine-approver-56656f9798-s4vnk\" (UID: \"a4a46bed-8781-4e46-a70e-868c24144a1f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s4vnk" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908360 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz9tx\" (UniqueName: \"kubernetes.io/projected/2f032c11-b3c4-45f7-be15-d5873624adcd-kube-api-access-fz9tx\") pod \"dns-operator-744455d44c-scl78\" (UID: \"2f032c11-b3c4-45f7-be15-d5873624adcd\") " pod="openshift-dns-operator/dns-operator-744455d44c-scl78" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908387 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908409 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26625d33-dbca-4e3f-97eb-34956096bf8a-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-hv45d\" (UID: \"26625d33-dbca-4e3f-97eb-34956096bf8a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hv45d" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908429 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/58f20271-d6bc-42dc-8932-fe80286fecd1-signing-cabundle\") pod \"service-ca-9c57cc56f-4blhs\" (UID: \"58f20271-d6bc-42dc-8932-fe80286fecd1\") " pod="openshift-service-ca/service-ca-9c57cc56f-4blhs" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908447 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5cd78bf5-69f3-4074-9dea-c7a459de6d4d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jfk2x\" (UID: \"5cd78bf5-69f3-4074-9dea-c7a459de6d4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfk2x" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908468 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b98945dd-f382-4fef-97b6-9037edd2bd9f-serving-cert\") pod \"service-ca-operator-777779d784-l8dg8\" (UID: \"b98945dd-f382-4fef-97b6-9037edd2bd9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8dg8" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908485 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/dddac23a-b546-4121-85df-7475aa7c5801-etcd-service-ca\") pod \"etcd-operator-b45778765-zcwfv\" (UID: \"dddac23a-b546-4121-85df-7475aa7c5801\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zcwfv" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908504 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-config\") pod \"controller-manager-879f6c89f-ndbtg\" (UID: \"1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908523 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6086ad74-5d02-4181-bb34-8c116409de42-config-volume\") pod \"collect-profiles-29491920-s88fm\" (UID: \"6086ad74-5d02-4181-bb34-8c116409de42\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491920-s88fm" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908542 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f7f10ec2-24a7-445e-8ae2-49da5ad6cf71-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5mbc7\" (UID: \"f7f10ec2-24a7-445e-8ae2-49da5ad6cf71\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5mbc7" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908570 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-trusted-ca-bundle\") 
pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908589 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsvtv\" (UniqueName: \"kubernetes.io/projected/32a5e4ed-16fd-4922-ac7f-515ea14b4fe5-kube-api-access-gsvtv\") pod \"cluster-image-registry-operator-dc59b4c8b-jqmqz\" (UID: \"32a5e4ed-16fd-4922-ac7f-515ea14b4fe5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jqmqz" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908609 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b5c47589-d94b-44fb-b31f-1f4045ea9e3c-proxy-tls\") pod \"machine-config-operator-74547568cd-85x5w\" (UID: \"b5c47589-d94b-44fb-b31f-1f4045ea9e3c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85x5w" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908627 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/97d3d9df-e52f-4eb3-8034-c5ace5c23da3-tmpfs\") pod \"packageserver-d55dfcdfc-7xjzm\" (UID: \"97d3d9df-e52f-4eb3-8034-c5ace5c23da3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7xjzm" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908648 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a4a46bed-8781-4e46-a70e-868c24144a1f-auth-proxy-config\") pod \"machine-approver-56656f9798-s4vnk\" (UID: \"a4a46bed-8781-4e46-a70e-868c24144a1f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s4vnk" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908669 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5c25\" (UniqueName: \"kubernetes.io/projected/76633ed6-00c1-4c35-aa9c-93c0867d676d-kube-api-access-z5c25\") pod \"cluster-samples-operator-665b6dd947-lrlf2\" (UID: \"76633ed6-00c1-4c35-aa9c-93c0867d676d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lrlf2" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908691 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59d3b2e2-c186-4551-b6b6-962b13b3a058-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5tpbz\" (UID: \"59d3b2e2-c186-4551-b6b6-962b13b3a058\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5tpbz" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908714 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/792e272a-64cd-47cd-8aac-eeb295e49f05-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-86448\" (UID: \"792e272a-64cd-47cd-8aac-eeb295e49f05\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86448" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908733 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtr8q\" (UniqueName: \"kubernetes.io/projected/7d3ec1e8-113d-4dbd-9a48-6daea4c74cdf-kube-api-access-mtr8q\") pod 
\"migrator-59844c95c7-ks2hk\" (UID: \"7d3ec1e8-113d-4dbd-9a48-6daea4c74cdf\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ks2hk" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908751 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71fd83ec-fa99-4caa-a216-1f1bb2be9251-service-ca-bundle\") pod \"router-default-5444994796-5mbhc\" (UID: \"71fd83ec-fa99-4caa-a216-1f1bb2be9251\") " pod="openshift-ingress/router-default-5444994796-5mbhc" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908801 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32a5e4ed-16fd-4922-ac7f-515ea14b4fe5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-jqmqz\" (UID: \"32a5e4ed-16fd-4922-ac7f-515ea14b4fe5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jqmqz" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908942 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4a46bed-8781-4e46-a70e-868c24144a1f-config\") pod \"machine-approver-56656f9798-s4vnk\" (UID: \"a4a46bed-8781-4e46-a70e-868c24144a1f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s4vnk" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908966 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.908986 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ada46f99-5088-4a53-b7b6-cc0d93f72412-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-sssvv\" (UID: \"ada46f99-5088-4a53-b7b6-cc0d93f72412\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sssvv" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.909003 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6086ad74-5d02-4181-bb34-8c116409de42-secret-volume\") pod \"collect-profiles-29491920-s88fm\" (UID: \"6086ad74-5d02-4181-bb34-8c116409de42\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491920-s88fm" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.909036 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.909054 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/97d3d9df-e52f-4eb3-8034-c5ace5c23da3-apiservice-cert\") pod \"packageserver-d55dfcdfc-7xjzm\" (UID: \"97d3d9df-e52f-4eb3-8034-c5ace5c23da3\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7xjzm" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.909085 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-serving-cert\") pod \"controller-manager-879f6c89f-ndbtg\" (UID: \"1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.909104 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/71fd83ec-fa99-4caa-a216-1f1bb2be9251-stats-auth\") pod \"router-default-5444994796-5mbhc\" (UID: \"71fd83ec-fa99-4caa-a216-1f1bb2be9251\") " pod="openshift-ingress/router-default-5444994796-5mbhc" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.909125 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c8d58e84-2299-4ceb-bb86-e5e7a451b3bc-metrics-tls\") pod \"ingress-operator-5b745b69d9-44s8w\" (UID: \"c8d58e84-2299-4ceb-bb86-e5e7a451b3bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44s8w" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.909145 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-675gh\" (UniqueName: \"kubernetes.io/projected/8481d31f-f701-4821-9893-5ebf45d2dcb8-kube-api-access-675gh\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.909172 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.909200 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/94eb6425-bdf2-43d1-926e-c94700a985be-oauth-serving-cert\") pod \"console-f9d7485db-zqrwf\" (UID: \"94eb6425-bdf2-43d1-926e-c94700a985be\") " pod="openshift-console/console-f9d7485db-zqrwf" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.909229 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl5bj\" (UniqueName: \"kubernetes.io/projected/b98945dd-f382-4fef-97b6-9037edd2bd9f-kube-api-access-xl5bj\") pod \"service-ca-operator-777779d784-l8dg8\" (UID: \"b98945dd-f382-4fef-97b6-9037edd2bd9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8dg8" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.909261 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dddac23a-b546-4121-85df-7475aa7c5801-serving-cert\") pod \"etcd-operator-b45778765-zcwfv\" (UID: \"dddac23a-b546-4121-85df-7475aa7c5801\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zcwfv" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.909287 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/58f20271-d6bc-42dc-8932-fe80286fecd1-signing-key\") pod \"service-ca-9c57cc56f-4blhs\" (UID: \"58f20271-d6bc-42dc-8932-fe80286fecd1\") " pod="openshift-service-ca/service-ca-9c57cc56f-4blhs" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.909311 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6vs7\" (UniqueName: \"kubernetes.io/projected/dddac23a-b546-4121-85df-7475aa7c5801-kube-api-access-j6vs7\") pod \"etcd-operator-b45778765-zcwfv\" (UID: \"dddac23a-b546-4121-85df-7475aa7c5801\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zcwfv" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.909363 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b18e2d30-da7f-4c5f-9700-20c7c05b1043-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rxlbk\" (UID: \"b18e2d30-da7f-4c5f-9700-20c7c05b1043\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rxlbk" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.909394 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfg7d\" (UniqueName: \"kubernetes.io/projected/26625d33-dbca-4e3f-97eb-34956096bf8a-kube-api-access-cfg7d\") pod \"openshift-apiserver-operator-796bbdcf4f-hv45d\" (UID: \"26625d33-dbca-4e3f-97eb-34956096bf8a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hv45d" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.909419 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f880472d-b13f-4b62-946f-3d74aafe5743-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4hbbw\" (UID: \"f880472d-b13f-4b62-946f-3d74aafe5743\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4hbbw" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.909457 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/97d3d9df-e52f-4eb3-8034-c5ace5c23da3-webhook-cert\") pod \"packageserver-d55dfcdfc-7xjzm\" (UID: \"97d3d9df-e52f-4eb3-8034-c5ace5c23da3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7xjzm" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.909591 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8481d31f-f701-4821-9893-5ebf45d2dcb8-audit-policies\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.909350 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59d3b2e2-c186-4551-b6b6-962b13b3a058-config\") pod \"kube-controller-manager-operator-78b949d7b-5tpbz\" (UID: \"59d3b2e2-c186-4551-b6b6-962b13b3a058\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5tpbz" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.910830 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.910876 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/76633ed6-00c1-4c35-aa9c-93c0867d676d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-lrlf2\" (UID: \"76633ed6-00c1-4c35-aa9c-93c0867d676d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lrlf2" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.910948 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/32a5e4ed-16fd-4922-ac7f-515ea14b4fe5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-jqmqz\" (UID: \"32a5e4ed-16fd-4922-ac7f-515ea14b4fe5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jqmqz" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.910973 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2944\" (UniqueName: \"kubernetes.io/projected/f880472d-b13f-4b62-946f-3d74aafe5743-kube-api-access-g2944\") pod \"machine-api-operator-5694c8668f-4hbbw\" (UID: \"f880472d-b13f-4b62-946f-3d74aafe5743\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4hbbw" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.910997 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/71fd83ec-fa99-4caa-a216-1f1bb2be9251-default-certificate\") pod \"router-default-5444994796-5mbhc\" (UID: \"71fd83ec-fa99-4caa-a216-1f1bb2be9251\") " pod="openshift-ingress/router-default-5444994796-5mbhc" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.911014 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a4a46bed-8781-4e46-a70e-868c24144a1f-auth-proxy-config\") pod \"machine-approver-56656f9798-s4vnk\" (UID: \"a4a46bed-8781-4e46-a70e-868c24144a1f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s4vnk" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.911023 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/94eb6425-bdf2-43d1-926e-c94700a985be-console-oauth-config\") pod \"console-f9d7485db-zqrwf\" (UID: \"94eb6425-bdf2-43d1-926e-c94700a985be\") " pod="openshift-console/console-f9d7485db-zqrwf" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.911123 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32a5e4ed-16fd-4922-ac7f-515ea14b4fe5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-jqmqz\" (UID: \"32a5e4ed-16fd-4922-ac7f-515ea14b4fe5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jqmqz" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.911331 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ada46f99-5088-4a53-b7b6-cc0d93f72412-trusted-ca-bundle\") pod 
\"authentication-operator-69f744f599-sssvv\" (UID: \"ada46f99-5088-4a53-b7b6-cc0d93f72412\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sssvv" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.911395 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpm6x\" (UniqueName: \"kubernetes.io/projected/b5c47589-d94b-44fb-b31f-1f4045ea9e3c-kube-api-access-rpm6x\") pod \"machine-config-operator-74547568cd-85x5w\" (UID: \"b5c47589-d94b-44fb-b31f-1f4045ea9e3c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85x5w" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.911435 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ada46f99-5088-4a53-b7b6-cc0d93f72412-config\") pod \"authentication-operator-69f744f599-sssvv\" (UID: \"ada46f99-5088-4a53-b7b6-cc0d93f72412\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sssvv" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.911548 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ada46f99-5088-4a53-b7b6-cc0d93f72412-service-ca-bundle\") pod \"authentication-operator-69f744f599-sssvv\" (UID: \"ada46f99-5088-4a53-b7b6-cc0d93f72412\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sssvv" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.911593 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b18e2d30-da7f-4c5f-9700-20c7c05b1043-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rxlbk\" (UID: \"b18e2d30-da7f-4c5f-9700-20c7c05b1043\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rxlbk" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.911598 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv87l\" (UniqueName: \"kubernetes.io/projected/f7f10ec2-24a7-445e-8ae2-49da5ad6cf71-kube-api-access-wv87l\") pod \"package-server-manager-789f6589d5-5mbc7\" (UID: \"f7f10ec2-24a7-445e-8ae2-49da5ad6cf71\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5mbc7" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.911645 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7942l\" (UniqueName: \"kubernetes.io/projected/71fd83ec-fa99-4caa-a216-1f1bb2be9251-kube-api-access-7942l\") pod \"router-default-5444994796-5mbhc\" (UID: \"71fd83ec-fa99-4caa-a216-1f1bb2be9251\") " pod="openshift-ingress/router-default-5444994796-5mbhc" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.912131 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ada46f99-5088-4a53-b7b6-cc0d93f72412-serving-cert\") pod \"authentication-operator-69f744f599-sssvv\" (UID: \"ada46f99-5088-4a53-b7b6-cc0d93f72412\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sssvv" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.912147 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ada46f99-5088-4a53-b7b6-cc0d93f72412-config\") pod 
\"authentication-operator-69f744f599-sssvv\" (UID: \"ada46f99-5088-4a53-b7b6-cc0d93f72412\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sssvv" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.912270 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/94eb6425-bdf2-43d1-926e-c94700a985be-console-config\") pod \"console-f9d7485db-zqrwf\" (UID: \"94eb6425-bdf2-43d1-926e-c94700a985be\") " pod="openshift-console/console-f9d7485db-zqrwf" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.912290 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ada46f99-5088-4a53-b7b6-cc0d93f72412-service-ca-bundle\") pod \"authentication-operator-69f744f599-sssvv\" (UID: \"ada46f99-5088-4a53-b7b6-cc0d93f72412\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sssvv" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.912328 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59d3b2e2-c186-4551-b6b6-962b13b3a058-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5tpbz\" (UID: \"59d3b2e2-c186-4551-b6b6-962b13b3a058\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5tpbz" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.912353 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkwbv\" (UniqueName: \"kubernetes.io/projected/5cd78bf5-69f3-4074-9dea-c7a459de6d4d-kube-api-access-zkwbv\") pod \"marketplace-operator-79b997595-jfk2x\" (UID: \"5cd78bf5-69f3-4074-9dea-c7a459de6d4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfk2x" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.913172 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a4a46bed-8781-4e46-a70e-868c24144a1f-machine-approver-tls\") pod \"machine-approver-56656f9798-s4vnk\" (UID: \"a4a46bed-8781-4e46-a70e-868c24144a1f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s4vnk" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.913241 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsrkr\" (UniqueName: \"kubernetes.io/projected/c8d58e84-2299-4ceb-bb86-e5e7a451b3bc-kube-api-access-qsrkr\") pod \"ingress-operator-5b745b69d9-44s8w\" (UID: \"c8d58e84-2299-4ceb-bb86-e5e7a451b3bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44s8w" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.913288 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4hmn\" (UniqueName: \"kubernetes.io/projected/94eb6425-bdf2-43d1-926e-c94700a985be-kube-api-access-v4hmn\") pod \"console-f9d7485db-zqrwf\" (UID: \"94eb6425-bdf2-43d1-926e-c94700a985be\") " pod="openshift-console/console-f9d7485db-zqrwf" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.913436 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8scc\" (UniqueName: \"kubernetes.io/projected/b18e2d30-da7f-4c5f-9700-20c7c05b1043-kube-api-access-z8scc\") pod \"openshift-controller-manager-operator-756b6f6bc6-rxlbk\" (UID: \"b18e2d30-da7f-4c5f-9700-20c7c05b1043\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rxlbk" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.913580 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26625d33-dbca-4e3f-97eb-34956096bf8a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hv45d\" (UID: \"26625d33-dbca-4e3f-97eb-34956096bf8a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hv45d" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.914230 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/94eb6425-bdf2-43d1-926e-c94700a985be-service-ca\") pod \"console-f9d7485db-zqrwf\" (UID: \"94eb6425-bdf2-43d1-926e-c94700a985be\") " pod="openshift-console/console-f9d7485db-zqrwf" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.914371 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26625d33-dbca-4e3f-97eb-34956096bf8a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hv45d\" (UID: \"26625d33-dbca-4e3f-97eb-34956096bf8a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hv45d" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.914429 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b18e2d30-da7f-4c5f-9700-20c7c05b1043-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rxlbk\" (UID: \"b18e2d30-da7f-4c5f-9700-20c7c05b1043\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rxlbk" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.914482 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/792e272a-64cd-47cd-8aac-eeb295e49f05-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-86448\" (UID: \"792e272a-64cd-47cd-8aac-eeb295e49f05\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86448" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.914522 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b5c47589-d94b-44fb-b31f-1f4045ea9e3c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-85x5w\" (UID: \"b5c47589-d94b-44fb-b31f-1f4045ea9e3c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85x5w" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.914573 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71fd83ec-fa99-4caa-a216-1f1bb2be9251-metrics-certs\") pod \"router-default-5444994796-5mbhc\" (UID: \"71fd83ec-fa99-4caa-a216-1f1bb2be9251\") " pod="openshift-ingress/router-default-5444994796-5mbhc" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.914766 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/792e272a-64cd-47cd-8aac-eeb295e49f05-config\") pod \"kube-apiserver-operator-766d6c64bb-86448\" (UID: \"792e272a-64cd-47cd-8aac-eeb295e49f05\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86448" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 
12:14:14.914913 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/76633ed6-00c1-4c35-aa9c-93c0867d676d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-lrlf2\" (UID: \"76633ed6-00c1-4c35-aa9c-93c0867d676d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lrlf2" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.915346 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94eb6425-bdf2-43d1-926e-c94700a985be-trusted-ca-bundle\") pod \"console-f9d7485db-zqrwf\" (UID: \"94eb6425-bdf2-43d1-926e-c94700a985be\") " pod="openshift-console/console-f9d7485db-zqrwf" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.915421 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.915476 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/94eb6425-bdf2-43d1-926e-c94700a985be-console-serving-cert\") pod \"console-f9d7485db-zqrwf\" (UID: \"94eb6425-bdf2-43d1-926e-c94700a985be\") " pod="openshift-console/console-f9d7485db-zqrwf" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.915683 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/94eb6425-bdf2-43d1-926e-c94700a985be-console-config\") pod \"console-f9d7485db-zqrwf\" (UID: \"94eb6425-bdf2-43d1-926e-c94700a985be\") " pod="openshift-console/console-f9d7485db-zqrwf" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.915967 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.917480 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht7zf\" (UniqueName: \"kubernetes.io/projected/a4a46bed-8781-4e46-a70e-868c24144a1f-kube-api-access-ht7zf\") pod \"machine-approver-56656f9798-s4vnk\" (UID: \"a4a46bed-8781-4e46-a70e-868c24144a1f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s4vnk" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.917532 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.917559 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b98945dd-f382-4fef-97b6-9037edd2bd9f-config\") pod \"service-ca-operator-777779d784-l8dg8\" (UID: \"b98945dd-f382-4fef-97b6-9037edd2bd9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8dg8" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.916443 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.916449 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/94eb6425-bdf2-43d1-926e-c94700a985be-oauth-serving-cert\") pod \"console-f9d7485db-zqrwf\" (UID: \"94eb6425-bdf2-43d1-926e-c94700a985be\") " pod="openshift-console/console-f9d7485db-zqrwf" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.917712 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94eb6425-bdf2-43d1-926e-c94700a985be-trusted-ca-bundle\") pod \"console-f9d7485db-zqrwf\" (UID: \"94eb6425-bdf2-43d1-926e-c94700a985be\") " pod="openshift-console/console-f9d7485db-zqrwf" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.917053 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/792e272a-64cd-47cd-8aac-eeb295e49f05-config\") pod \"kube-apiserver-operator-766d6c64bb-86448\" (UID: \"792e272a-64cd-47cd-8aac-eeb295e49f05\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86448" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.915987 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f032c11-b3c4-45f7-be15-d5873624adcd-metrics-tls\") pod \"dns-operator-744455d44c-scl78\" (UID: \"2f032c11-b3c4-45f7-be15-d5873624adcd\") " pod="openshift-dns-operator/dns-operator-744455d44c-scl78" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.916417 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59d3b2e2-c186-4551-b6b6-962b13b3a058-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5tpbz\" (UID: \"59d3b2e2-c186-4551-b6b6-962b13b3a058\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5tpbz" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.916317 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f880472d-b13f-4b62-946f-3d74aafe5743-images\") pod \"machine-api-operator-5694c8668f-4hbbw\" (UID: \"f880472d-b13f-4b62-946f-3d74aafe5743\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4hbbw" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.918273 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.918360 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26625d33-dbca-4e3f-97eb-34956096bf8a-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-hv45d\" (UID: \"26625d33-dbca-4e3f-97eb-34956096bf8a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hv45d" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.918549 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-client-ca\") pod \"controller-manager-879f6c89f-ndbtg\" (UID: \"1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.918947 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b18e2d30-da7f-4c5f-9700-20c7c05b1043-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rxlbk\" (UID: \"b18e2d30-da7f-4c5f-9700-20c7c05b1043\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rxlbk" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.918958 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/792e272a-64cd-47cd-8aac-eeb295e49f05-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-86448\" (UID: \"792e272a-64cd-47cd-8aac-eeb295e49f05\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86448" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.919416 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4a46bed-8781-4e46-a70e-868c24144a1f-config\") pod \"machine-approver-56656f9798-s4vnk\" (UID: \"a4a46bed-8781-4e46-a70e-868c24144a1f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s4vnk" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.919625 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ndbtg\" (UID: \"1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.920050 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/32a5e4ed-16fd-4922-ac7f-515ea14b4fe5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-jqmqz\" (UID: \"32a5e4ed-16fd-4922-ac7f-515ea14b4fe5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jqmqz" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.920565 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-serving-cert\") pod \"controller-manager-879f6c89f-ndbtg\" (UID: \"1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.920679 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f880472d-b13f-4b62-946f-3d74aafe5743-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4hbbw\" (UID: \"f880472d-b13f-4b62-946f-3d74aafe5743\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-4hbbw" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.921214 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f880472d-b13f-4b62-946f-3d74aafe5743-config\") pod \"machine-api-operator-5694c8668f-4hbbw\" (UID: \"f880472d-b13f-4b62-946f-3d74aafe5743\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4hbbw" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.921302 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/94eb6425-bdf2-43d1-926e-c94700a985be-console-oauth-config\") pod \"console-f9d7485db-zqrwf\" (UID: \"94eb6425-bdf2-43d1-926e-c94700a985be\") " pod="openshift-console/console-f9d7485db-zqrwf" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.921477 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-config\") pod \"controller-manager-879f6c89f-ndbtg\" (UID: \"1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.922258 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/94eb6425-bdf2-43d1-926e-c94700a985be-console-serving-cert\") pod \"console-f9d7485db-zqrwf\" (UID: \"94eb6425-bdf2-43d1-926e-c94700a985be\") " pod="openshift-console/console-f9d7485db-zqrwf" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.939686 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.977582 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.984409 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c8d58e84-2299-4ceb-bb86-e5e7a451b3bc-metrics-tls\") pod \"ingress-operator-5b745b69d9-44s8w\" (UID: \"c8d58e84-2299-4ceb-bb86-e5e7a451b3bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44s8w" Jan 27 12:14:14 crc kubenswrapper[4745]: I0127 12:14:14.998458 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.006153 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.006576 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.006711 4745 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.007033 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.007249 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.006995 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.009479 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018194 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/199bb9ad-0a44-4631-995f-c4ef6809cd54-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-r79xk\" (UID: \"199bb9ad-0a44-4631-995f-c4ef6809cd54\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-r79xk" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018235 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dddac23a-b546-4121-85df-7475aa7c5801-config\") pod \"etcd-operator-b45778765-zcwfv\" (UID: \"dddac23a-b546-4121-85df-7475aa7c5801\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zcwfv" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018254 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dddac23a-b546-4121-85df-7475aa7c5801-etcd-client\") pod \"etcd-operator-b45778765-zcwfv\" (UID: \"dddac23a-b546-4121-85df-7475aa7c5801\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zcwfv" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018272 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b5c47589-d94b-44fb-b31f-1f4045ea9e3c-images\") pod 
\"machine-config-operator-74547568cd-85x5w\" (UID: \"b5c47589-d94b-44fb-b31f-1f4045ea9e3c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85x5w" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018289 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgsk\" (UniqueName: \"kubernetes.io/projected/6086ad74-5d02-4181-bb34-8c116409de42-kube-api-access-6kgsk\") pod \"collect-profiles-29491920-s88fm\" (UID: \"6086ad74-5d02-4181-bb34-8c116409de42\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491920-s88fm" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018310 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkn95\" (UniqueName: \"kubernetes.io/projected/199bb9ad-0a44-4631-995f-c4ef6809cd54-kube-api-access-xkn95\") pod \"multus-admission-controller-857f4d67dd-r79xk\" (UID: \"199bb9ad-0a44-4631-995f-c4ef6809cd54\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-r79xk" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018326 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkhsp\" (UniqueName: \"kubernetes.io/projected/97d3d9df-e52f-4eb3-8034-c5ace5c23da3-kube-api-access-rkhsp\") pod \"packageserver-d55dfcdfc-7xjzm\" (UID: \"97d3d9df-e52f-4eb3-8034-c5ace5c23da3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7xjzm" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018341 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/dddac23a-b546-4121-85df-7475aa7c5801-etcd-ca\") pod \"etcd-operator-b45778765-zcwfv\" (UID: \"dddac23a-b546-4121-85df-7475aa7c5801\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zcwfv" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018360 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5cd78bf5-69f3-4074-9dea-c7a459de6d4d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jfk2x\" (UID: \"5cd78bf5-69f3-4074-9dea-c7a459de6d4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfk2x" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018393 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/58f20271-d6bc-42dc-8932-fe80286fecd1-signing-cabundle\") pod \"service-ca-9c57cc56f-4blhs\" (UID: \"58f20271-d6bc-42dc-8932-fe80286fecd1\") " pod="openshift-service-ca/service-ca-9c57cc56f-4blhs" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018412 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5cd78bf5-69f3-4074-9dea-c7a459de6d4d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jfk2x\" (UID: \"5cd78bf5-69f3-4074-9dea-c7a459de6d4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfk2x" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018427 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b98945dd-f382-4fef-97b6-9037edd2bd9f-serving-cert\") pod \"service-ca-operator-777779d784-l8dg8\" (UID: \"b98945dd-f382-4fef-97b6-9037edd2bd9f\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8dg8" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018441 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/dddac23a-b546-4121-85df-7475aa7c5801-etcd-service-ca\") pod \"etcd-operator-b45778765-zcwfv\" (UID: \"dddac23a-b546-4121-85df-7475aa7c5801\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zcwfv" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018469 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6086ad74-5d02-4181-bb34-8c116409de42-config-volume\") pod \"collect-profiles-29491920-s88fm\" (UID: \"6086ad74-5d02-4181-bb34-8c116409de42\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491920-s88fm" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018492 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f7f10ec2-24a7-445e-8ae2-49da5ad6cf71-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5mbc7\" (UID: \"f7f10ec2-24a7-445e-8ae2-49da5ad6cf71\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5mbc7" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018521 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b5c47589-d94b-44fb-b31f-1f4045ea9e3c-proxy-tls\") pod \"machine-config-operator-74547568cd-85x5w\" (UID: \"b5c47589-d94b-44fb-b31f-1f4045ea9e3c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85x5w" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018543 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/97d3d9df-e52f-4eb3-8034-c5ace5c23da3-tmpfs\") pod \"packageserver-d55dfcdfc-7xjzm\" (UID: \"97d3d9df-e52f-4eb3-8034-c5ace5c23da3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7xjzm" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018583 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtr8q\" (UniqueName: \"kubernetes.io/projected/7d3ec1e8-113d-4dbd-9a48-6daea4c74cdf-kube-api-access-mtr8q\") pod \"migrator-59844c95c7-ks2hk\" (UID: \"7d3ec1e8-113d-4dbd-9a48-6daea4c74cdf\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ks2hk" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018607 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71fd83ec-fa99-4caa-a216-1f1bb2be9251-service-ca-bundle\") pod \"router-default-5444994796-5mbhc\" (UID: \"71fd83ec-fa99-4caa-a216-1f1bb2be9251\") " pod="openshift-ingress/router-default-5444994796-5mbhc" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018629 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6086ad74-5d02-4181-bb34-8c116409de42-secret-volume\") pod \"collect-profiles-29491920-s88fm\" (UID: \"6086ad74-5d02-4181-bb34-8c116409de42\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491920-s88fm" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018653 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/97d3d9df-e52f-4eb3-8034-c5ace5c23da3-apiservice-cert\") pod \"packageserver-d55dfcdfc-7xjzm\" (UID: \"97d3d9df-e52f-4eb3-8034-c5ace5c23da3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7xjzm" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018675 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/71fd83ec-fa99-4caa-a216-1f1bb2be9251-stats-auth\") pod \"router-default-5444994796-5mbhc\" (UID: \"71fd83ec-fa99-4caa-a216-1f1bb2be9251\") " pod="openshift-ingress/router-default-5444994796-5mbhc" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018711 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl5bj\" (UniqueName: \"kubernetes.io/projected/b98945dd-f382-4fef-97b6-9037edd2bd9f-kube-api-access-xl5bj\") pod \"service-ca-operator-777779d784-l8dg8\" (UID: \"b98945dd-f382-4fef-97b6-9037edd2bd9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8dg8" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018732 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dddac23a-b546-4121-85df-7475aa7c5801-serving-cert\") pod \"etcd-operator-b45778765-zcwfv\" (UID: \"dddac23a-b546-4121-85df-7475aa7c5801\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zcwfv" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018756 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/58f20271-d6bc-42dc-8932-fe80286fecd1-signing-key\") pod \"service-ca-9c57cc56f-4blhs\" (UID: \"58f20271-d6bc-42dc-8932-fe80286fecd1\") " pod="openshift-service-ca/service-ca-9c57cc56f-4blhs" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018784 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6vs7\" (UniqueName: \"kubernetes.io/projected/dddac23a-b546-4121-85df-7475aa7c5801-kube-api-access-j6vs7\") pod \"etcd-operator-b45778765-zcwfv\" (UID: \"dddac23a-b546-4121-85df-7475aa7c5801\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zcwfv" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018844 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/97d3d9df-e52f-4eb3-8034-c5ace5c23da3-webhook-cert\") pod \"packageserver-d55dfcdfc-7xjzm\" (UID: \"97d3d9df-e52f-4eb3-8034-c5ace5c23da3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7xjzm" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018885 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/71fd83ec-fa99-4caa-a216-1f1bb2be9251-default-certificate\") pod \"router-default-5444994796-5mbhc\" (UID: \"71fd83ec-fa99-4caa-a216-1f1bb2be9251\") " pod="openshift-ingress/router-default-5444994796-5mbhc" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018911 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpm6x\" (UniqueName: \"kubernetes.io/projected/b5c47589-d94b-44fb-b31f-1f4045ea9e3c-kube-api-access-rpm6x\") pod \"machine-config-operator-74547568cd-85x5w\" (UID: \"b5c47589-d94b-44fb-b31f-1f4045ea9e3c\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85x5w" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018936 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv87l\" (UniqueName: \"kubernetes.io/projected/f7f10ec2-24a7-445e-8ae2-49da5ad6cf71-kube-api-access-wv87l\") pod \"package-server-manager-789f6589d5-5mbc7\" (UID: \"f7f10ec2-24a7-445e-8ae2-49da5ad6cf71\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5mbc7" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018958 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7942l\" (UniqueName: \"kubernetes.io/projected/71fd83ec-fa99-4caa-a216-1f1bb2be9251-kube-api-access-7942l\") pod \"router-default-5444994796-5mbhc\" (UID: \"71fd83ec-fa99-4caa-a216-1f1bb2be9251\") " pod="openshift-ingress/router-default-5444994796-5mbhc" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.018990 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkwbv\" (UniqueName: \"kubernetes.io/projected/5cd78bf5-69f3-4074-9dea-c7a459de6d4d-kube-api-access-zkwbv\") pod \"marketplace-operator-79b997595-jfk2x\" (UID: \"5cd78bf5-69f3-4074-9dea-c7a459de6d4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfk2x" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.019060 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b5c47589-d94b-44fb-b31f-1f4045ea9e3c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-85x5w\" (UID: \"b5c47589-d94b-44fb-b31f-1f4045ea9e3c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85x5w" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.019084 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71fd83ec-fa99-4caa-a216-1f1bb2be9251-metrics-certs\") pod \"router-default-5444994796-5mbhc\" (UID: \"71fd83ec-fa99-4caa-a216-1f1bb2be9251\") " pod="openshift-ingress/router-default-5444994796-5mbhc" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.019162 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b98945dd-f382-4fef-97b6-9037edd2bd9f-config\") pod \"service-ca-operator-777779d784-l8dg8\" (UID: \"b98945dd-f382-4fef-97b6-9037edd2bd9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8dg8" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.019197 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcwpk\" (UniqueName: \"kubernetes.io/projected/58f20271-d6bc-42dc-8932-fe80286fecd1-kube-api-access-xcwpk\") pod \"service-ca-9c57cc56f-4blhs\" (UID: \"58f20271-d6bc-42dc-8932-fe80286fecd1\") " pod="openshift-service-ca/service-ca-9c57cc56f-4blhs" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.019933 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/97d3d9df-e52f-4eb3-8034-c5ace5c23da3-tmpfs\") pod \"packageserver-d55dfcdfc-7xjzm\" (UID: \"97d3d9df-e52f-4eb3-8034-c5ace5c23da3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7xjzm" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.020317 4745 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b5c47589-d94b-44fb-b31f-1f4045ea9e3c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-85x5w\" (UID: \"b5c47589-d94b-44fb-b31f-1f4045ea9e3c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85x5w" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.025731 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.028720 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8d58e84-2299-4ceb-bb86-e5e7a451b3bc-trusted-ca\") pod \"ingress-operator-5b745b69d9-44s8w\" (UID: \"c8d58e84-2299-4ceb-bb86-e5e7a451b3bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44s8w" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.048655 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.058197 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.063120 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6086ad74-5d02-4181-bb34-8c116409de42-secret-volume\") pod \"collect-profiles-29491920-s88fm\" (UID: \"6086ad74-5d02-4181-bb34-8c116409de42\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491920-s88fm" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.077688 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.085204 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.085789 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.086364 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.097822 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.118100 4745 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.137523 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.156708 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.162091 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/199bb9ad-0a44-4631-995f-c4ef6809cd54-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-r79xk\" (UID: \"199bb9ad-0a44-4631-995f-c4ef6809cd54\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-r79xk" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.177716 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.198432 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.217441 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.237918 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.244964 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dddac23a-b546-4121-85df-7475aa7c5801-serving-cert\") pod \"etcd-operator-b45778765-zcwfv\" (UID: \"dddac23a-b546-4121-85df-7475aa7c5801\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zcwfv" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.258931 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.272422 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dddac23a-b546-4121-85df-7475aa7c5801-etcd-client\") pod \"etcd-operator-b45778765-zcwfv\" (UID: \"dddac23a-b546-4121-85df-7475aa7c5801\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zcwfv" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.278374 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.298467 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.299302 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dddac23a-b546-4121-85df-7475aa7c5801-config\") pod \"etcd-operator-b45778765-zcwfv\" (UID: \"dddac23a-b546-4121-85df-7475aa7c5801\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zcwfv" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.318964 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.329411 
4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/dddac23a-b546-4121-85df-7475aa7c5801-etcd-ca\") pod \"etcd-operator-b45778765-zcwfv\" (UID: \"dddac23a-b546-4121-85df-7475aa7c5801\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zcwfv" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.338481 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.340074 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/dddac23a-b546-4121-85df-7475aa7c5801-etcd-service-ca\") pod \"etcd-operator-b45778765-zcwfv\" (UID: \"dddac23a-b546-4121-85df-7475aa7c5801\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zcwfv" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.358331 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.378136 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.397662 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.399067 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b5c47589-d94b-44fb-b31f-1f4045ea9e3c-images\") pod \"machine-config-operator-74547568cd-85x5w\" (UID: \"b5c47589-d94b-44fb-b31f-1f4045ea9e3c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85x5w" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.418040 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.438340 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.443741 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b5c47589-d94b-44fb-b31f-1f4045ea9e3c-proxy-tls\") pod \"machine-config-operator-74547568cd-85x5w\" (UID: \"b5c47589-d94b-44fb-b31f-1f4045ea9e3c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85x5w" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.457728 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.478441 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.484937 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71fd83ec-fa99-4caa-a216-1f1bb2be9251-metrics-certs\") pod \"router-default-5444994796-5mbhc\" (UID: \"71fd83ec-fa99-4caa-a216-1f1bb2be9251\") " pod="openshift-ingress/router-default-5444994796-5mbhc" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.498651 4745 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.519269 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.524197 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/71fd83ec-fa99-4caa-a216-1f1bb2be9251-default-certificate\") pod \"router-default-5444994796-5mbhc\" (UID: \"71fd83ec-fa99-4caa-a216-1f1bb2be9251\") " pod="openshift-ingress/router-default-5444994796-5mbhc" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.538306 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.559256 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.561615 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71fd83ec-fa99-4caa-a216-1f1bb2be9251-service-ca-bundle\") pod \"router-default-5444994796-5mbhc\" (UID: \"71fd83ec-fa99-4caa-a216-1f1bb2be9251\") " pod="openshift-ingress/router-default-5444994796-5mbhc" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.577762 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.585613 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/71fd83ec-fa99-4caa-a216-1f1bb2be9251-stats-auth\") pod \"router-default-5444994796-5mbhc\" (UID: \"71fd83ec-fa99-4caa-a216-1f1bb2be9251\") " pod="openshift-ingress/router-default-5444994796-5mbhc" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.598269 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.618669 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.638574 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.657950 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.678300 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.698276 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.718637 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.738344 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 12:14:15 crc 
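The reflector.go:368 "Caches populated" lines above mark client-go reflectors finishing their initial List+Watch for the per-namespace Secret and ConfigMap caches that back kubelet's volume plugins; a mount that needs, say, openshift-etcd-operator/etcd-client can only complete once the corresponding cache has synced, which is why each "Caches populated" line is typically followed by the matching "MountVolume.SetUp succeeded". A sketch of that standard client-go pattern (illustrative only, not kubelet's exact wiring; the kubeconfig path is an assumption):

```go
// Standard client-go informer setup: the reflector's initial List corresponds to
// the "Caches populated" log lines, and WaitForCacheSync is the gate a consumer
// (like a volume mounter) sits behind before reading from the lister.
package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	stop := make(chan struct{})
	defer close(stop)

	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	secrets := factory.Core().V1().Secrets()
	informer := secrets.Informer()
	factory.Start(stop) // reflectors begin List+Watch here

	if !cache.WaitForCacheSync(stop, informer.HasSynced) {
		// Compare the E0127 "failed to sync secret cache" entries further down.
		panic("failed to sync secret cache")
	}
	// Once synced, reads are served from the local cache, not the API server.
	_, _ = secrets.Lister().Secrets("openshift-etcd-operator").Get("etcd-client")
}
```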
Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.773474 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5cd78bf5-69f3-4074-9dea-c7a459de6d4d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jfk2x\" (UID: \"5cd78bf5-69f3-4074-9dea-c7a459de6d4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfk2x"
Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.779602 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.796297 4745 request.go:700] Waited for 1.000151508s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&limit=500&resourceVersion=0
Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.799305 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.830406 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.837688 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.843594 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5cd78bf5-69f3-4074-9dea-c7a459de6d4d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jfk2x\" (UID: \"5cd78bf5-69f3-4074-9dea-c7a459de6d4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfk2x"
Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.858040 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.878198 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.898120 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.926965 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.937972 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.957319 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.977671 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
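The request.go:700 entry above ("Waited for 1.000151508s due to client-side throttling, not priority and fairness") is client-go's own token-bucket rate limiter delaying the GET for mcc-proxy-tls; as the message says, this is distinct from server-side API Priority and Fairness. A sketch of where that knob lives, with assumed values for illustration (not what kubelet actually configures):

```go
// Client-side throttling lives on the rest.Config used to build the clientset.
// When more requests are in flight than the bucket allows, client-go blocks and
// logs the "Waited for ... due to client-side throttling" line seen above.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Kubeconfig path is an assumption for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Equivalent to setting cfg.QPS = 50 and cfg.Burst = 100 (illustrative values):
	// requests beyond the burst wait in the limiter before hitting the wire.
	cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(50, 100)
	_ = kubernetes.NewForConfigOrDie(cfg)
}
```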
Jan 27 12:14:15 crc kubenswrapper[4745]: I0127 12:14:15.998154 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.002438 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b98945dd-f382-4fef-97b6-9037edd2bd9f-serving-cert\") pod \"service-ca-operator-777779d784-l8dg8\" (UID: \"b98945dd-f382-4fef-97b6-9037edd2bd9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8dg8"
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.017545 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 27 12:14:16 crc kubenswrapper[4745]: E0127 12:14:16.019798 4745 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition
Jan 27 12:14:16 crc kubenswrapper[4745]: E0127 12:14:16.019842 4745 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Jan 27 12:14:16 crc kubenswrapper[4745]: E0127 12:14:16.019883 4745 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Jan 27 12:14:16 crc kubenswrapper[4745]: E0127 12:14:16.019804 4745 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 27 12:14:16 crc kubenswrapper[4745]: E0127 12:14:16.019939 4745 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition
Jan 27 12:14:16 crc kubenswrapper[4745]: E0127 12:14:16.019859 4745 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition
Jan 27 12:14:16 crc kubenswrapper[4745]: E0127 12:14:16.019921 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6086ad74-5d02-4181-bb34-8c116409de42-config-volume podName:6086ad74-5d02-4181-bb34-8c116409de42 nodeName:}" failed. No retries permitted until 2026-01-27 12:14:16.519891965 +0000 UTC m=+149.324802693 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6086ad74-5d02-4181-bb34-8c116409de42-config-volume") pod "collect-profiles-29491920-s88fm" (UID: "6086ad74-5d02-4181-bb34-8c116409de42") : failed to sync configmap cache: timed out waiting for the condition
Jan 27 12:14:16 crc kubenswrapper[4745]: E0127 12:14:16.020017 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97d3d9df-e52f-4eb3-8034-c5ace5c23da3-apiservice-cert podName:97d3d9df-e52f-4eb3-8034-c5ace5c23da3 nodeName:}" failed. No retries permitted until 2026-01-27 12:14:16.520001598 +0000 UTC m=+149.324912296 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/97d3d9df-e52f-4eb3-8034-c5ace5c23da3-apiservice-cert") pod "packageserver-d55dfcdfc-7xjzm" (UID: "97d3d9df-e52f-4eb3-8034-c5ace5c23da3") : failed to sync secret cache: timed out waiting for the condition
Jan 27 12:14:16 crc kubenswrapper[4745]: E0127 12:14:16.020034 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97d3d9df-e52f-4eb3-8034-c5ace5c23da3-webhook-cert podName:97d3d9df-e52f-4eb3-8034-c5ace5c23da3 nodeName:}" failed. No retries permitted until 2026-01-27 12:14:16.520026189 +0000 UTC m=+149.324936907 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/97d3d9df-e52f-4eb3-8034-c5ace5c23da3-webhook-cert") pod "packageserver-d55dfcdfc-7xjzm" (UID: "97d3d9df-e52f-4eb3-8034-c5ace5c23da3") : failed to sync secret cache: timed out waiting for the condition
Jan 27 12:14:16 crc kubenswrapper[4745]: E0127 12:14:16.020052 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7f10ec2-24a7-445e-8ae2-49da5ad6cf71-package-server-manager-serving-cert podName:f7f10ec2-24a7-445e-8ae2-49da5ad6cf71 nodeName:}" failed. No retries permitted until 2026-01-27 12:14:16.520043759 +0000 UTC m=+149.324954467 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/f7f10ec2-24a7-445e-8ae2-49da5ad6cf71-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-5mbc7" (UID: "f7f10ec2-24a7-445e-8ae2-49da5ad6cf71") : failed to sync secret cache: timed out waiting for the condition
Jan 27 12:14:16 crc kubenswrapper[4745]: E0127 12:14:16.020066 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f20271-d6bc-42dc-8932-fe80286fecd1-signing-key podName:58f20271-d6bc-42dc-8932-fe80286fecd1 nodeName:}" failed. No retries permitted until 2026-01-27 12:14:16.52006035 +0000 UTC m=+149.324971048 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/58f20271-d6bc-42dc-8932-fe80286fecd1-signing-key") pod "service-ca-9c57cc56f-4blhs" (UID: "58f20271-d6bc-42dc-8932-fe80286fecd1") : failed to sync secret cache: timed out waiting for the condition
Jan 27 12:14:16 crc kubenswrapper[4745]: E0127 12:14:16.020086 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/58f20271-d6bc-42dc-8932-fe80286fecd1-signing-cabundle podName:58f20271-d6bc-42dc-8932-fe80286fecd1 nodeName:}" failed. No retries permitted until 2026-01-27 12:14:16.52007518 +0000 UTC m=+149.324985878 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/58f20271-d6bc-42dc-8932-fe80286fecd1-signing-cabundle") pod "service-ca-9c57cc56f-4blhs" (UID: "58f20271-d6bc-42dc-8932-fe80286fecd1") : failed to sync configmap cache: timed out waiting for the condition
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.020631 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b98945dd-f382-4fef-97b6-9037edd2bd9f-config\") pod \"service-ca-operator-777779d784-l8dg8\" (UID: \"b98945dd-f382-4fef-97b6-9037edd2bd9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8dg8"
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.037764 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.057907 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.077727 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.098268 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.117887 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.139135 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.184412 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mq9d\" (UniqueName: \"kubernetes.io/projected/ea5a99ae-b999-419c-9da0-1333ba6378ea-kube-api-access-2mq9d\") pod \"console-operator-58897d9998-xhn4h\" (UID: \"ea5a99ae-b999-419c-9da0-1333ba6378ea\") " pod="openshift-console-operator/console-operator-58897d9998-xhn4h"
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.197417 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6swl\" (UniqueName: \"kubernetes.io/projected/46ec327c-832f-4a20-9b99-1aa3315c312f-kube-api-access-c6swl\") pod \"route-controller-manager-6576b87f9c-8nsr4\" (UID: \"46ec327c-832f-4a20-9b99-1aa3315c312f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4"
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.218847 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.227650 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ckdj\" (UniqueName: \"kubernetes.io/projected/78fe56b7-5ff3-4540-bfda-efeef43859f6-kube-api-access-5ckdj\") pod \"apiserver-7bbb656c7d-qt59f\" (UID: \"78fe56b7-5ff3-4540-bfda-efeef43859f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f"
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.237800 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
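The E0127 lines above show the failure path: the secret/configmap caches for those volumes had not synced within the mount's wait window, so MountVolume.SetUp fails and nestedpendingoperations.go:348 schedules a retry, refusing any earlier attempt until durationBeforeRetry (500ms here) has elapsed; on repeated failures that delay grows exponentially. The reconciler "MountVolume started" entries at 12:14:16.546... further down are exactly those retries firing once the 500ms window expires. A minimal sketch of that retry shape using apimachinery's wait package (kubelet's real code tracks backoff per pending operation rather than through this helper):

```go
// Exponential backoff in the shape of the nestedpendingoperations log lines:
// first retry no sooner than 500ms after the failure, doubling thereafter.
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	attempts := 0
	backoff := wait.Backoff{Duration: 500 * time.Millisecond, Factor: 2.0, Steps: 5}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempts++
		fmt.Printf("attempt %d: MountVolume.SetUp\n", attempts)
		if attempts < 3 { // stands in for "failed to sync secret cache"
			return false, nil // not done yet; retry after the next interval
		}
		return true, nil // caches synced; the mount succeeds
	})
	if errors.Is(err, wait.ErrWaitTimeout) {
		fmt.Println("gave up after max steps")
	}
}
```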
object-"openshift-service-ca"/"signing-key" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.257900 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.278365 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.290599 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.298266 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.318483 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.339267 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.379079 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzzrx\" (UniqueName: \"kubernetes.io/projected/478908d6-765e-4bd8-a3ef-3142a7641a3b-kube-api-access-gzzrx\") pod \"downloads-7954f5f757-hbsbc\" (UID: \"478908d6-765e-4bd8-a3ef-3142a7641a3b\") " pod="openshift-console/downloads-7954f5f757-hbsbc" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.402739 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85z5f\" (UniqueName: \"kubernetes.io/projected/65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2-kube-api-access-85z5f\") pod \"openshift-config-operator-7777fb866f-l2frr\" (UID: \"65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.418625 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.424125 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw9zf\" (UniqueName: \"kubernetes.io/projected/cef45995-4242-499f-adeb-cc12aa630b5c-kube-api-access-lw9zf\") pod \"apiserver-76f77b778f-tf24j\" (UID: \"cef45995-4242-499f-adeb-cc12aa630b5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.424528 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.434404 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xhn4h" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.458062 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.478317 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.498455 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.510582 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.517857 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.519861 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hbsbc" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.538400 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.546423 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/58f20271-d6bc-42dc-8932-fe80286fecd1-signing-cabundle\") pod \"service-ca-9c57cc56f-4blhs\" (UID: \"58f20271-d6bc-42dc-8932-fe80286fecd1\") " pod="openshift-service-ca/service-ca-9c57cc56f-4blhs" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.546469 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6086ad74-5d02-4181-bb34-8c116409de42-config-volume\") pod \"collect-profiles-29491920-s88fm\" (UID: \"6086ad74-5d02-4181-bb34-8c116409de42\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491920-s88fm" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.546499 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f7f10ec2-24a7-445e-8ae2-49da5ad6cf71-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5mbc7\" (UID: \"f7f10ec2-24a7-445e-8ae2-49da5ad6cf71\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5mbc7" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.546582 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/97d3d9df-e52f-4eb3-8034-c5ace5c23da3-apiservice-cert\") pod \"packageserver-d55dfcdfc-7xjzm\" (UID: \"97d3d9df-e52f-4eb3-8034-c5ace5c23da3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7xjzm" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.546634 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/58f20271-d6bc-42dc-8932-fe80286fecd1-signing-key\") pod \"service-ca-9c57cc56f-4blhs\" (UID: \"58f20271-d6bc-42dc-8932-fe80286fecd1\") " pod="openshift-service-ca/service-ca-9c57cc56f-4blhs" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.546681 
4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/97d3d9df-e52f-4eb3-8034-c5ace5c23da3-webhook-cert\") pod \"packageserver-d55dfcdfc-7xjzm\" (UID: \"97d3d9df-e52f-4eb3-8034-c5ace5c23da3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7xjzm" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.548579 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/58f20271-d6bc-42dc-8932-fe80286fecd1-signing-cabundle\") pod \"service-ca-9c57cc56f-4blhs\" (UID: \"58f20271-d6bc-42dc-8932-fe80286fecd1\") " pod="openshift-service-ca/service-ca-9c57cc56f-4blhs" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.548679 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6086ad74-5d02-4181-bb34-8c116409de42-config-volume\") pod \"collect-profiles-29491920-s88fm\" (UID: \"6086ad74-5d02-4181-bb34-8c116409de42\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491920-s88fm" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.555839 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/97d3d9df-e52f-4eb3-8034-c5ace5c23da3-webhook-cert\") pod \"packageserver-d55dfcdfc-7xjzm\" (UID: \"97d3d9df-e52f-4eb3-8034-c5ace5c23da3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7xjzm" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.555862 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f7f10ec2-24a7-445e-8ae2-49da5ad6cf71-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5mbc7\" (UID: \"f7f10ec2-24a7-445e-8ae2-49da5ad6cf71\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5mbc7" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.557610 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.558032 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/58f20271-d6bc-42dc-8932-fe80286fecd1-signing-key\") pod \"service-ca-9c57cc56f-4blhs\" (UID: \"58f20271-d6bc-42dc-8932-fe80286fecd1\") " pod="openshift-service-ca/service-ca-9c57cc56f-4blhs" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.558537 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/97d3d9df-e52f-4eb3-8034-c5ace5c23da3-apiservice-cert\") pod \"packageserver-d55dfcdfc-7xjzm\" (UID: \"97d3d9df-e52f-4eb3-8034-c5ace5c23da3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7xjzm" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.581906 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.598871 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.618374 4745 reflector.go:368] Caches populated for *v1.Secret from 
object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.638239 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.657775 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.677467 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.697511 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.705163 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.718124 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.739399 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4"] Jan 27 12:14:16 crc kubenswrapper[4745]: W0127 12:14:16.748818 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46ec327c_832f_4a20_9b99_1aa3315c312f.slice/crio-af2a7acf865056a177cde9e5acabb19333c29ce1e1aaba36f630fd42c880bb45 WatchSource:0}: Error finding container af2a7acf865056a177cde9e5acabb19333c29ce1e1aaba36f630fd42c880bb45: Status 404 returned error can't find the container with id af2a7acf865056a177cde9e5acabb19333c29ce1e1aaba36f630fd42c880bb45 Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.758607 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8d58e84-2299-4ceb-bb86-e5e7a451b3bc-bound-sa-token\") pod \"ingress-operator-5b745b69d9-44s8w\" (UID: \"c8d58e84-2299-4ceb-bb86-e5e7a451b3bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44s8w" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.780632 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldshw\" (UniqueName: \"kubernetes.io/projected/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-kube-api-access-ldshw\") pod \"controller-manager-879f6c89f-ndbtg\" (UID: \"1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.794414 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m72hb\" (UniqueName: \"kubernetes.io/projected/ada46f99-5088-4a53-b7b6-cc0d93f72412-kube-api-access-m72hb\") pod \"authentication-operator-69f744f599-sssvv\" (UID: \"ada46f99-5088-4a53-b7b6-cc0d93f72412\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sssvv" Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.808412 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-l2frr"] Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.809868 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hbsbc"] 
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.815386 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/32a5e4ed-16fd-4922-ac7f-515ea14b4fe5-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-jqmqz\" (UID: \"32a5e4ed-16fd-4922-ac7f-515ea14b4fe5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jqmqz"
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.816119 4745 request.go:700] Waited for 1.907368724s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/serviceaccounts/dns-operator/token
Jan 27 12:14:16 crc kubenswrapper[4745]: W0127 12:14:16.818699 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65e3ba78_9bdc_41bc_ad3f_ddccbf79c6c2.slice/crio-20f34b310a603f8ecf9a0ead7ed7f5c56264cbdd47e3a4cb3a0b5f83480892ba WatchSource:0}: Error finding container 20f34b310a603f8ecf9a0ead7ed7f5c56264cbdd47e3a4cb3a0b5f83480892ba: Status 404 returned error can't find the container with id 20f34b310a603f8ecf9a0ead7ed7f5c56264cbdd47e3a4cb3a0b5f83480892ba
Jan 27 12:14:16 crc kubenswrapper[4745]: W0127 12:14:16.828510 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod478908d6_765e_4bd8_a3ef_3142a7641a3b.slice/crio-3a94dd47d79a4a9a4fc7d88be18c7a079a2893e06b60d306e010910db374634b WatchSource:0}: Error finding container 3a94dd47d79a4a9a4fc7d88be18c7a079a2893e06b60d306e010910db374634b: Status 404 returned error can't find the container with id 3a94dd47d79a4a9a4fc7d88be18c7a079a2893e06b60d306e010910db374634b
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.831638 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz9tx\" (UniqueName: \"kubernetes.io/projected/2f032c11-b3c4-45f7-be15-d5873624adcd-kube-api-access-fz9tx\") pod \"dns-operator-744455d44c-scl78\" (UID: \"2f032c11-b3c4-45f7-be15-d5873624adcd\") " pod="openshift-dns-operator/dns-operator-744455d44c-scl78"
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.855453 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsvtv\" (UniqueName: \"kubernetes.io/projected/32a5e4ed-16fd-4922-ac7f-515ea14b4fe5-kube-api-access-gsvtv\") pod \"cluster-image-registry-operator-dc59b4c8b-jqmqz\" (UID: \"32a5e4ed-16fd-4922-ac7f-515ea14b4fe5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jqmqz"
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.872287 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5c25\" (UniqueName: \"kubernetes.io/projected/76633ed6-00c1-4c35-aa9c-93c0867d676d-kube-api-access-z5c25\") pod \"cluster-samples-operator-665b6dd947-lrlf2\" (UID: \"76633ed6-00c1-4c35-aa9c-93c0867d676d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lrlf2"
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.886219 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tf24j"]
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.896007 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/792e272a-64cd-47cd-8aac-eeb295e49f05-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-86448\" (UID: \"792e272a-64cd-47cd-8aac-eeb295e49f05\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86448"
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.913233 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-675gh\" (UniqueName: \"kubernetes.io/projected/8481d31f-f701-4821-9893-5ebf45d2dcb8-kube-api-access-675gh\") pod \"oauth-openshift-558db77b4-7rjtn\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn"
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.938480 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xhn4h"]
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.941758 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f"]
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.946446 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lrlf2"
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.948441 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn"
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.951103 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfg7d\" (UniqueName: \"kubernetes.io/projected/26625d33-dbca-4e3f-97eb-34956096bf8a-kube-api-access-cfg7d\") pod \"openshift-apiserver-operator-796bbdcf4f-hv45d\" (UID: \"26625d33-dbca-4e3f-97eb-34956096bf8a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hv45d"
Jan 27 12:14:16 crc kubenswrapper[4745]: W0127 12:14:16.953599 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea5a99ae_b999_419c_9da0_1333ba6378ea.slice/crio-74c5eb5a5a07bf4ceaa35b9a1d4ac437a25f4192e5b3f3c4e46a0c3e0465f3e0 WatchSource:0}: Error finding container 74c5eb5a5a07bf4ceaa35b9a1d4ac437a25f4192e5b3f3c4e46a0c3e0465f3e0: Status 404 returned error can't find the container with id 74c5eb5a5a07bf4ceaa35b9a1d4ac437a25f4192e5b3f3c4e46a0c3e0465f3e0
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.958587 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2944\" (UniqueName: \"kubernetes.io/projected/f880472d-b13f-4b62-946f-3d74aafe5743-kube-api-access-g2944\") pod \"machine-api-operator-5694c8668f-4hbbw\" (UID: \"f880472d-b13f-4b62-946f-3d74aafe5743\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4hbbw"
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.968148 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tf24j" event={"ID":"cef45995-4242-499f-adeb-cc12aa630b5c","Type":"ContainerStarted","Data":"9f69d54055fc5ef86421cac18c612cb7e3ca1f8b87423a10fc3293cb391e4d48"}
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.968767 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hbsbc" event={"ID":"478908d6-765e-4bd8-a3ef-3142a7641a3b","Type":"ContainerStarted","Data":"3a94dd47d79a4a9a4fc7d88be18c7a079a2893e06b60d306e010910db374634b"}
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.974997 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-scl78"
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.977106 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" event={"ID":"78fe56b7-5ff3-4540-bfda-efeef43859f6","Type":"ContainerStarted","Data":"64a875d09fe874a38d78e59e6373218d524556e9e994b1aaa76d5758c7287573"}
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.977610 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59d3b2e2-c186-4551-b6b6-962b13b3a058-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5tpbz\" (UID: \"59d3b2e2-c186-4551-b6b6-962b13b3a058\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5tpbz"
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.982518 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" event={"ID":"65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2","Type":"ContainerStarted","Data":"20f34b310a603f8ecf9a0ead7ed7f5c56264cbdd47e3a4cb3a0b5f83480892ba"}
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.983044 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-sssvv"
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.983446 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xhn4h" event={"ID":"ea5a99ae-b999-419c-9da0-1333ba6378ea","Type":"ContainerStarted","Data":"74c5eb5a5a07bf4ceaa35b9a1d4ac437a25f4192e5b3f3c4e46a0c3e0465f3e0"}
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.984508 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4" event={"ID":"46ec327c-832f-4a20-9b99-1aa3315c312f","Type":"ContainerStarted","Data":"af2a7acf865056a177cde9e5acabb19333c29ce1e1aaba36f630fd42c880bb45"}
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.991686 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg"
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.995623 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsrkr\" (UniqueName: \"kubernetes.io/projected/c8d58e84-2299-4ceb-bb86-e5e7a451b3bc-kube-api-access-qsrkr\") pod \"ingress-operator-5b745b69d9-44s8w\" (UID: \"c8d58e84-2299-4ceb-bb86-e5e7a451b3bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44s8w"
Jan 27 12:14:16 crc kubenswrapper[4745]: I0127 12:14:16.996145 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hv45d"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.011784 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4hmn\" (UniqueName: \"kubernetes.io/projected/94eb6425-bdf2-43d1-926e-c94700a985be-kube-api-access-v4hmn\") pod \"console-f9d7485db-zqrwf\" (UID: \"94eb6425-bdf2-43d1-926e-c94700a985be\") " pod="openshift-console/console-f9d7485db-zqrwf"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.049746 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4hbbw"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.050240 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8scc\" (UniqueName: \"kubernetes.io/projected/b18e2d30-da7f-4c5f-9700-20c7c05b1043-kube-api-access-z8scc\") pod \"openshift-controller-manager-operator-756b6f6bc6-rxlbk\" (UID: \"b18e2d30-da7f-4c5f-9700-20c7c05b1043\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rxlbk"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.055249 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86448"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.055482 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht7zf\" (UniqueName: \"kubernetes.io/projected/a4a46bed-8781-4e46-a70e-868c24144a1f-kube-api-access-ht7zf\") pod \"machine-approver-56656f9798-s4vnk\" (UID: \"a4a46bed-8781-4e46-a70e-868c24144a1f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s4vnk"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.060830 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5tpbz"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.066069 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jqmqz"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.068484 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44s8w"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.098473 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kgsk\" (UniqueName: \"kubernetes.io/projected/6086ad74-5d02-4181-bb34-8c116409de42-kube-api-access-6kgsk\") pod \"collect-profiles-29491920-s88fm\" (UID: \"6086ad74-5d02-4181-bb34-8c116409de42\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491920-s88fm"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.112685 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkn95\" (UniqueName: \"kubernetes.io/projected/199bb9ad-0a44-4631-995f-c4ef6809cd54-kube-api-access-xkn95\") pod \"multus-admission-controller-857f4d67dd-r79xk\" (UID: \"199bb9ad-0a44-4631-995f-c4ef6809cd54\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-r79xk"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.140538 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lrlf2"]
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.140695 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkhsp\" (UniqueName: \"kubernetes.io/projected/97d3d9df-e52f-4eb3-8034-c5ace5c23da3-kube-api-access-rkhsp\") pod \"packageserver-d55dfcdfc-7xjzm\" (UID: \"97d3d9df-e52f-4eb3-8034-c5ace5c23da3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7xjzm"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.153936 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcwpk\" (UniqueName: \"kubernetes.io/projected/58f20271-d6bc-42dc-8932-fe80286fecd1-kube-api-access-xcwpk\") pod \"service-ca-9c57cc56f-4blhs\" (UID: \"58f20271-d6bc-42dc-8932-fe80286fecd1\") " pod="openshift-service-ca/service-ca-9c57cc56f-4blhs"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.171418 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7rjtn"]
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.174720 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6vs7\" (UniqueName: \"kubernetes.io/projected/dddac23a-b546-4121-85df-7475aa7c5801-kube-api-access-j6vs7\") pod \"etcd-operator-b45778765-zcwfv\" (UID: \"dddac23a-b546-4121-85df-7475aa7c5801\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zcwfv"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.182757 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491920-s88fm"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.190448 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4blhs"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.192399 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpm6x\" (UniqueName: \"kubernetes.io/projected/b5c47589-d94b-44fb-b31f-1f4045ea9e3c-kube-api-access-rpm6x\") pod \"machine-config-operator-74547568cd-85x5w\" (UID: \"b5c47589-d94b-44fb-b31f-1f4045ea9e3c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85x5w"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.197272 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7xjzm"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.220851 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv87l\" (UniqueName: \"kubernetes.io/projected/f7f10ec2-24a7-445e-8ae2-49da5ad6cf71-kube-api-access-wv87l\") pod \"package-server-manager-789f6589d5-5mbc7\" (UID: \"f7f10ec2-24a7-445e-8ae2-49da5ad6cf71\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5mbc7"
Jan 27 12:14:17 crc kubenswrapper[4745]: W0127 12:14:17.223362 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8481d31f_f701_4821_9893_5ebf45d2dcb8.slice/crio-9a9d608edecbd2447e88cb41653f8576f10819ac176b3420633935c74a10f58c WatchSource:0}: Error finding container 9a9d608edecbd2447e88cb41653f8576f10819ac176b3420633935c74a10f58c: Status 404 returned error can't find the container with id 9a9d608edecbd2447e88cb41653f8576f10819ac176b3420633935c74a10f58c
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.232023 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7942l\" (UniqueName: \"kubernetes.io/projected/71fd83ec-fa99-4caa-a216-1f1bb2be9251-kube-api-access-7942l\") pod \"router-default-5444994796-5mbhc\" (UID: \"71fd83ec-fa99-4caa-a216-1f1bb2be9251\") " pod="openshift-ingress/router-default-5444994796-5mbhc"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.254192 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkwbv\" (UniqueName: \"kubernetes.io/projected/5cd78bf5-69f3-4074-9dea-c7a459de6d4d-kube-api-access-zkwbv\") pod \"marketplace-operator-79b997595-jfk2x\" (UID: \"5cd78bf5-69f3-4074-9dea-c7a459de6d4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfk2x"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.262290 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rxlbk"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.274592 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtr8q\" (UniqueName: \"kubernetes.io/projected/7d3ec1e8-113d-4dbd-9a48-6daea4c74cdf-kube-api-access-mtr8q\") pod \"migrator-59844c95c7-ks2hk\" (UID: \"7d3ec1e8-113d-4dbd-9a48-6daea4c74cdf\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ks2hk"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.291730 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl5bj\" (UniqueName: \"kubernetes.io/projected/b98945dd-f382-4fef-97b6-9037edd2bd9f-kube-api-access-xl5bj\") pod \"service-ca-operator-777779d784-l8dg8\" (UID: \"b98945dd-f382-4fef-97b6-9037edd2bd9f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8dg8"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.315194 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zqrwf"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.315498 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s4vnk"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.358125 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/67cab9e2-eb12-495b-a350-8fc0886c1a29-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.358577 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a76652e-d0b0-449d-9e41-b363948890bf-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nqb7h\" (UID: \"0a76652e-d0b0-449d-9e41-b363948890bf\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqb7h"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.358634 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/89c991f5-72eb-4c8e-a31f-9db5b46ffc5d-srv-cert\") pod \"olm-operator-6b444d44fb-skmp4\" (UID: \"89c991f5-72eb-4c8e-a31f-9db5b46ffc5d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-skmp4"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.358667 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58509a9e-6184-4459-9e85-f8e999f965e3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p58fj\" (UID: \"58509a9e-6184-4459-9e85-f8e999f965e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p58fj"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.358714 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4c71dc30-ef02-41cf-a2f8-973dfc972054-srv-cert\") pod \"catalog-operator-68c6474976-6wlnl\" (UID: \"4c71dc30-ef02-41cf-a2f8-973dfc972054\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6wlnl"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.358737 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7db0b54-29c2-4ab4-b919-c83dcbb8f094-proxy-tls\") pod \"machine-config-controller-84d6567774-gkffs\" (UID: \"a7db0b54-29c2-4ab4-b919-c83dcbb8f094\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gkffs"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.358820 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2gwb\" (UniqueName: \"kubernetes.io/projected/0a76652e-d0b0-449d-9e41-b363948890bf-kube-api-access-b2gwb\") pod \"kube-storage-version-migrator-operator-b67b599dd-nqb7h\" (UID: \"0a76652e-d0b0-449d-9e41-b363948890bf\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqb7h"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.358850 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/67cab9e2-eb12-495b-a350-8fc0886c1a29-registry-certificates\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.358883 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.358911 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/89c991f5-72eb-4c8e-a31f-9db5b46ffc5d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-skmp4\" (UID: \"89c991f5-72eb-4c8e-a31f-9db5b46ffc5d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-skmp4"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.358934 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67cab9e2-eb12-495b-a350-8fc0886c1a29-trusted-ca\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.358978 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ac22d21e-ce2f-4e46-8b65-e6c84480b954-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bgpwd\" (UID: \"ac22d21e-ce2f-4e46-8b65-e6c84480b954\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bgpwd"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.359020 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4c71dc30-ef02-41cf-a2f8-973dfc972054-profile-collector-cert\") pod \"catalog-operator-68c6474976-6wlnl\" (UID: \"4c71dc30-ef02-41cf-a2f8-973dfc972054\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6wlnl"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.359057 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmw47\" (UniqueName: \"kubernetes.io/projected/ac22d21e-ce2f-4e46-8b65-e6c84480b954-kube-api-access-xmw47\") pod \"control-plane-machine-set-operator-78cbb6b69f-bgpwd\" (UID: \"ac22d21e-ce2f-4e46-8b65-e6c84480b954\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bgpwd"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.359078 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwtw9\" (UniqueName: \"kubernetes.io/projected/a7db0b54-29c2-4ab4-b919-c83dcbb8f094-kube-api-access-hwtw9\") pod \"machine-config-controller-84d6567774-gkffs\" (UID: \"a7db0b54-29c2-4ab4-b919-c83dcbb8f094\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gkffs"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.359119 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7db0b54-29c2-4ab4-b919-c83dcbb8f094-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-gkffs\" (UID: \"a7db0b54-29c2-4ab4-b919-c83dcbb8f094\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gkffs"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.359145 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/67cab9e2-eb12-495b-a350-8fc0886c1a29-bound-sa-token\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.359166 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/58509a9e-6184-4459-9e85-f8e999f965e3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p58fj\" (UID: \"58509a9e-6184-4459-9e85-f8e999f965e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p58fj"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.359247 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wk2d\" (UniqueName: \"kubernetes.io/projected/67cab9e2-eb12-495b-a350-8fc0886c1a29-kube-api-access-4wk2d\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.359266 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58509a9e-6184-4459-9e85-f8e999f965e3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p58fj\" (UID: \"58509a9e-6184-4459-9e85-f8e999f965e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p58fj"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.359285 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/67cab9e2-eb12-495b-a350-8fc0886c1a29-registry-tls\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.359324 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h549q\" (UniqueName: \"kubernetes.io/projected/4c71dc30-ef02-41cf-a2f8-973dfc972054-kube-api-access-h549q\") pod \"catalog-operator-68c6474976-6wlnl\" (UID: \"4c71dc30-ef02-41cf-a2f8-973dfc972054\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6wlnl"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.359344 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz74n\" (UniqueName: \"kubernetes.io/projected/89c991f5-72eb-4c8e-a31f-9db5b46ffc5d-kube-api-access-wz74n\") pod \"olm-operator-6b444d44fb-skmp4\" (UID: \"89c991f5-72eb-4c8e-a31f-9db5b46ffc5d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-skmp4"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.359382 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a76652e-d0b0-449d-9e41-b363948890bf-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nqb7h\" (UID: \"0a76652e-d0b0-449d-9e41-b363948890bf\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqb7h"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.359405 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/67cab9e2-eb12-495b-a350-8fc0886c1a29-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:17 crc kubenswrapper[4745]: E0127 12:14:17.360196 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:17.860157068 +0000 UTC m=+150.665067756 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.377176 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-r79xk"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.388165 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zcwfv"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.411146 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85x5w"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.416695 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5mbhc"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.420351 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-scl78"]
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.449503 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jfk2x"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.455711 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ks2hk"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.460104 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.460499 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wk2d\" (UniqueName: \"kubernetes.io/projected/67cab9e2-eb12-495b-a350-8fc0886c1a29-kube-api-access-4wk2d\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.460526 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58509a9e-6184-4459-9e85-f8e999f965e3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p58fj\" (UID: \"58509a9e-6184-4459-9e85-f8e999f965e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p58fj"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.460591 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/67cab9e2-eb12-495b-a350-8fc0886c1a29-registry-tls\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.460695 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h549q\" (UniqueName: \"kubernetes.io/projected/4c71dc30-ef02-41cf-a2f8-973dfc972054-kube-api-access-h549q\") pod \"catalog-operator-68c6474976-6wlnl\" (UID: \"4c71dc30-ef02-41cf-a2f8-973dfc972054\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6wlnl"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.460720 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz74n\" (UniqueName: \"kubernetes.io/projected/89c991f5-72eb-4c8e-a31f-9db5b46ffc5d-kube-api-access-wz74n\") pod \"olm-operator-6b444d44fb-skmp4\" (UID: \"89c991f5-72eb-4c8e-a31f-9db5b46ffc5d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-skmp4"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.460799 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/449f3406-19aa-43e5-8364-efd6f68ec1c7-node-bootstrap-token\") pod \"machine-config-server-njxtf\" (UID: \"449f3406-19aa-43e5-8364-efd6f68ec1c7\") " pod="openshift-machine-config-operator/machine-config-server-njxtf"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.460831 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/449f3406-19aa-43e5-8364-efd6f68ec1c7-certs\") pod \"machine-config-server-njxtf\" (UID: \"449f3406-19aa-43e5-8364-efd6f68ec1c7\") " pod="openshift-machine-config-operator/machine-config-server-njxtf"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.460874 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/67cab9e2-eb12-495b-a350-8fc0886c1a29-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.460898 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a76652e-d0b0-449d-9e41-b363948890bf-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nqb7h\" (UID: \"0a76652e-d0b0-449d-9e41-b363948890bf\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqb7h"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.460920 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0b5cf703-06c8-4a98-b58b-71543d23affe-csi-data-dir\") pod \"csi-hostpathplugin-v7vzv\" (UID: \"0b5cf703-06c8-4a98-b58b-71543d23affe\") " pod="hostpath-provisioner/csi-hostpathplugin-v7vzv"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.460942 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f6423c2b-6727-4dd3-9dc2-d4c6d1dd4ebd-metrics-tls\") pod \"dns-default-lwzbq\" (UID: \"f6423c2b-6727-4dd3-9dc2-d4c6d1dd4ebd\") " pod="openshift-dns/dns-default-lwzbq"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.461025 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0b5cf703-06c8-4a98-b58b-71543d23affe-plugins-dir\") pod \"csi-hostpathplugin-v7vzv\" (UID: \"0b5cf703-06c8-4a98-b58b-71543d23affe\") " pod="hostpath-provisioner/csi-hostpathplugin-v7vzv"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.461057 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8frz\" (UniqueName: \"kubernetes.io/projected/0b5cf703-06c8-4a98-b58b-71543d23affe-kube-api-access-z8frz\") pod \"csi-hostpathplugin-v7vzv\" (UID: \"0b5cf703-06c8-4a98-b58b-71543d23affe\") " pod="hostpath-provisioner/csi-hostpathplugin-v7vzv"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.461102 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/67cab9e2-eb12-495b-a350-8fc0886c1a29-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.461124 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a76652e-d0b0-449d-9e41-b363948890bf-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nqb7h\" (UID: \"0a76652e-d0b0-449d-9e41-b363948890bf\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqb7h"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.461212 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/89c991f5-72eb-4c8e-a31f-9db5b46ffc5d-srv-cert\") pod \"olm-operator-6b444d44fb-skmp4\" (UID: \"89c991f5-72eb-4c8e-a31f-9db5b46ffc5d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-skmp4"
Jan 27 12:14:17 crc kubenswrapper[4745]: E0127 12:14:17.461256 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:17.961235663 +0000 UTC m=+150.766146371 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.461336 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58509a9e-6184-4459-9e85-f8e999f965e3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p58fj\" (UID: \"58509a9e-6184-4459-9e85-f8e999f965e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p58fj"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.461367 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4c71dc30-ef02-41cf-a2f8-973dfc972054-srv-cert\") pod \"catalog-operator-68c6474976-6wlnl\" (UID: \"4c71dc30-ef02-41cf-a2f8-973dfc972054\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6wlnl"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.461431 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7db0b54-29c2-4ab4-b919-c83dcbb8f094-proxy-tls\") pod \"machine-config-controller-84d6567774-gkffs\" (UID: \"a7db0b54-29c2-4ab4-b919-c83dcbb8f094\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gkffs"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.472206 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58509a9e-6184-4459-9e85-f8e999f965e3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p58fj\" (UID: \"58509a9e-6184-4459-9e85-f8e999f965e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p58fj"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.479937 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/67cab9e2-eb12-495b-a350-8fc0886c1a29-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.484646 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a76652e-d0b0-449d-9e41-b363948890bf-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nqb7h\" (UID: \"0a76652e-d0b0-449d-9e41-b363948890bf\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqb7h"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.485216 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a76652e-d0b0-449d-9e41-b363948890bf-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nqb7h\" (UID: \"0a76652e-d0b0-449d-9e41-b363948890bf\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqb7h"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.485327 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/67cab9e2-eb12-495b-a350-8fc0886c1a29-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.488359 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5mbc7"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.488399 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/67cab9e2-eb12-495b-a350-8fc0886c1a29-registry-certificates\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.488442 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2gwb\" (UniqueName: \"kubernetes.io/projected/0a76652e-d0b0-449d-9e41-b363948890bf-kube-api-access-b2gwb\") pod \"kube-storage-version-migrator-operator-b67b599dd-nqb7h\" (UID: \"0a76652e-d0b0-449d-9e41-b363948890bf\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqb7h"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.488845 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.488969 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0b5cf703-06c8-4a98-b58b-71543d23affe-registration-dir\") pod \"csi-hostpathplugin-v7vzv\" (UID: \"0b5cf703-06c8-4a98-b58b-71543d23affe\") " pod="hostpath-provisioner/csi-hostpathplugin-v7vzv"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.489569 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7db0b54-29c2-4ab4-b919-c83dcbb8f094-proxy-tls\") pod \"machine-config-controller-84d6567774-gkffs\" (UID: \"a7db0b54-29c2-4ab4-b919-c83dcbb8f094\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gkffs"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.489704 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7fhs\" (UniqueName: \"kubernetes.io/projected/f6423c2b-6727-4dd3-9dc2-d4c6d1dd4ebd-kube-api-access-g7fhs\") pod \"dns-default-lwzbq\" (UID: \"f6423c2b-6727-4dd3-9dc2-d4c6d1dd4ebd\") " pod="openshift-dns/dns-default-lwzbq"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.492533 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/67cab9e2-eb12-495b-a350-8fc0886c1a29-registry-certificates\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.499229 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8dg8"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.520677 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/67cab9e2-eb12-495b-a350-8fc0886c1a29-registry-tls\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:17 crc kubenswrapper[4745]: E0127 12:14:17.522775 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:18.022751282 +0000 UTC m=+150.827661970 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.524528 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/89c991f5-72eb-4c8e-a31f-9db5b46ffc5d-srv-cert\") pod \"olm-operator-6b444d44fb-skmp4\" (UID: \"89c991f5-72eb-4c8e-a31f-9db5b46ffc5d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-skmp4"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.509209 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/89c991f5-72eb-4c8e-a31f-9db5b46ffc5d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-skmp4\" (UID: \"89c991f5-72eb-4c8e-a31f-9db5b46ffc5d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-skmp4"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.526365 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67cab9e2-eb12-495b-a350-8fc0886c1a29-trusted-ca\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.527369 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h549q\" (UniqueName: \"kubernetes.io/projected/4c71dc30-ef02-41cf-a2f8-973dfc972054-kube-api-access-h549q\") pod \"catalog-operator-68c6474976-6wlnl\" (UID: \"4c71dc30-ef02-41cf-a2f8-973dfc972054\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6wlnl"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.528037 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ac22d21e-ce2f-4e46-8b65-e6c84480b954-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bgpwd\" (UID: \"ac22d21e-ce2f-4e46-8b65-e6c84480b954\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bgpwd"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.528120 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4c71dc30-ef02-41cf-a2f8-973dfc972054-profile-collector-cert\") pod \"catalog-operator-68c6474976-6wlnl\" (UID: \"4c71dc30-ef02-41cf-a2f8-973dfc972054\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6wlnl"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.528196 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cebf393-91a7-4018-8f08-358ca7f7155b-cert\") pod \"ingress-canary-9bws5\" (UID: \"8cebf393-91a7-4018-8f08-358ca7f7155b\") " pod="openshift-ingress-canary/ingress-canary-9bws5"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.528222 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wkfn\" (UniqueName: \"kubernetes.io/projected/8cebf393-91a7-4018-8f08-358ca7f7155b-kube-api-access-9wkfn\") pod \"ingress-canary-9bws5\" (UID: \"8cebf393-91a7-4018-8f08-358ca7f7155b\") " pod="openshift-ingress-canary/ingress-canary-9bws5"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.528270 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brnqb\" (UniqueName: \"kubernetes.io/projected/449f3406-19aa-43e5-8364-efd6f68ec1c7-kube-api-access-brnqb\") pod \"machine-config-server-njxtf\" (UID: \"449f3406-19aa-43e5-8364-efd6f68ec1c7\") " pod="openshift-machine-config-operator/machine-config-server-njxtf"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.528303 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6423c2b-6727-4dd3-9dc2-d4c6d1dd4ebd-config-volume\") pod \"dns-default-lwzbq\" (UID: \"f6423c2b-6727-4dd3-9dc2-d4c6d1dd4ebd\") " pod="openshift-dns/dns-default-lwzbq"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.528328 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67cab9e2-eb12-495b-a350-8fc0886c1a29-trusted-ca\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.528378 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmw47\" (UniqueName: \"kubernetes.io/projected/ac22d21e-ce2f-4e46-8b65-e6c84480b954-kube-api-access-xmw47\") pod
\"control-plane-machine-set-operator-78cbb6b69f-bgpwd\" (UID: \"ac22d21e-ce2f-4e46-8b65-e6c84480b954\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bgpwd" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.528401 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwtw9\" (UniqueName: \"kubernetes.io/projected/a7db0b54-29c2-4ab4-b919-c83dcbb8f094-kube-api-access-hwtw9\") pod \"machine-config-controller-84d6567774-gkffs\" (UID: \"a7db0b54-29c2-4ab4-b919-c83dcbb8f094\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gkffs" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.528429 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7db0b54-29c2-4ab4-b919-c83dcbb8f094-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-gkffs\" (UID: \"a7db0b54-29c2-4ab4-b919-c83dcbb8f094\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gkffs" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.528484 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/67cab9e2-eb12-495b-a350-8fc0886c1a29-bound-sa-token\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.528501 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/58509a9e-6184-4459-9e85-f8e999f965e3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p58fj\" (UID: \"58509a9e-6184-4459-9e85-f8e999f965e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p58fj" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.528544 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0b5cf703-06c8-4a98-b58b-71543d23affe-mountpoint-dir\") pod \"csi-hostpathplugin-v7vzv\" (UID: \"0b5cf703-06c8-4a98-b58b-71543d23affe\") " pod="hostpath-provisioner/csi-hostpathplugin-v7vzv" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.528597 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0b5cf703-06c8-4a98-b58b-71543d23affe-socket-dir\") pod \"csi-hostpathplugin-v7vzv\" (UID: \"0b5cf703-06c8-4a98-b58b-71543d23affe\") " pod="hostpath-provisioner/csi-hostpathplugin-v7vzv" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.529675 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58509a9e-6184-4459-9e85-f8e999f965e3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p58fj\" (UID: \"58509a9e-6184-4459-9e85-f8e999f965e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p58fj" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.530635 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7db0b54-29c2-4ab4-b919-c83dcbb8f094-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-gkffs\" (UID: 
\"a7db0b54-29c2-4ab4-b919-c83dcbb8f094\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gkffs" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.532844 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wk2d\" (UniqueName: \"kubernetes.io/projected/67cab9e2-eb12-495b-a350-8fc0886c1a29-kube-api-access-4wk2d\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.535447 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4c71dc30-ef02-41cf-a2f8-973dfc972054-srv-cert\") pod \"catalog-operator-68c6474976-6wlnl\" (UID: \"4c71dc30-ef02-41cf-a2f8-973dfc972054\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6wlnl" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.542351 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ac22d21e-ce2f-4e46-8b65-e6c84480b954-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bgpwd\" (UID: \"ac22d21e-ce2f-4e46-8b65-e6c84480b954\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bgpwd" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.544348 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2gwb\" (UniqueName: \"kubernetes.io/projected/0a76652e-d0b0-449d-9e41-b363948890bf-kube-api-access-b2gwb\") pod \"kube-storage-version-migrator-operator-b67b599dd-nqb7h\" (UID: \"0a76652e-d0b0-449d-9e41-b363948890bf\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqb7h" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.551233 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4c71dc30-ef02-41cf-a2f8-973dfc972054-profile-collector-cert\") pod \"catalog-operator-68c6474976-6wlnl\" (UID: \"4c71dc30-ef02-41cf-a2f8-973dfc972054\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6wlnl" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.551957 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz74n\" (UniqueName: \"kubernetes.io/projected/89c991f5-72eb-4c8e-a31f-9db5b46ffc5d-kube-api-access-wz74n\") pod \"olm-operator-6b444d44fb-skmp4\" (UID: \"89c991f5-72eb-4c8e-a31f-9db5b46ffc5d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-skmp4" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.553983 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/89c991f5-72eb-4c8e-a31f-9db5b46ffc5d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-skmp4\" (UID: \"89c991f5-72eb-4c8e-a31f-9db5b46ffc5d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-skmp4" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.574630 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/67cab9e2-eb12-495b-a350-8fc0886c1a29-bound-sa-token\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.578417 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4hbbw"] Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.592786 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmw47\" (UniqueName: \"kubernetes.io/projected/ac22d21e-ce2f-4e46-8b65-e6c84480b954-kube-api-access-xmw47\") pod \"control-plane-machine-set-operator-78cbb6b69f-bgpwd\" (UID: \"ac22d21e-ce2f-4e46-8b65-e6c84480b954\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bgpwd" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.633548 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.633736 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cebf393-91a7-4018-8f08-358ca7f7155b-cert\") pod \"ingress-canary-9bws5\" (UID: \"8cebf393-91a7-4018-8f08-358ca7f7155b\") " pod="openshift-ingress-canary/ingress-canary-9bws5" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.633769 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wkfn\" (UniqueName: \"kubernetes.io/projected/8cebf393-91a7-4018-8f08-358ca7f7155b-kube-api-access-9wkfn\") pod \"ingress-canary-9bws5\" (UID: \"8cebf393-91a7-4018-8f08-358ca7f7155b\") " pod="openshift-ingress-canary/ingress-canary-9bws5" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.633791 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brnqb\" (UniqueName: \"kubernetes.io/projected/449f3406-19aa-43e5-8364-efd6f68ec1c7-kube-api-access-brnqb\") pod \"machine-config-server-njxtf\" (UID: \"449f3406-19aa-43e5-8364-efd6f68ec1c7\") " pod="openshift-machine-config-operator/machine-config-server-njxtf" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.633831 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6423c2b-6727-4dd3-9dc2-d4c6d1dd4ebd-config-volume\") pod \"dns-default-lwzbq\" (UID: \"f6423c2b-6727-4dd3-9dc2-d4c6d1dd4ebd\") " pod="openshift-dns/dns-default-lwzbq" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.633890 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0b5cf703-06c8-4a98-b58b-71543d23affe-mountpoint-dir\") pod \"csi-hostpathplugin-v7vzv\" (UID: \"0b5cf703-06c8-4a98-b58b-71543d23affe\") " pod="hostpath-provisioner/csi-hostpathplugin-v7vzv" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.633914 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0b5cf703-06c8-4a98-b58b-71543d23affe-socket-dir\") pod \"csi-hostpathplugin-v7vzv\" (UID: \"0b5cf703-06c8-4a98-b58b-71543d23affe\") " pod="hostpath-provisioner/csi-hostpathplugin-v7vzv" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.633971 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/449f3406-19aa-43e5-8364-efd6f68ec1c7-node-bootstrap-token\") pod \"machine-config-server-njxtf\" (UID: \"449f3406-19aa-43e5-8364-efd6f68ec1c7\") " pod="openshift-machine-config-operator/machine-config-server-njxtf" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.633992 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/449f3406-19aa-43e5-8364-efd6f68ec1c7-certs\") pod \"machine-config-server-njxtf\" (UID: \"449f3406-19aa-43e5-8364-efd6f68ec1c7\") " pod="openshift-machine-config-operator/machine-config-server-njxtf" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.634017 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0b5cf703-06c8-4a98-b58b-71543d23affe-csi-data-dir\") pod \"csi-hostpathplugin-v7vzv\" (UID: \"0b5cf703-06c8-4a98-b58b-71543d23affe\") " pod="hostpath-provisioner/csi-hostpathplugin-v7vzv" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.634038 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f6423c2b-6727-4dd3-9dc2-d4c6d1dd4ebd-metrics-tls\") pod \"dns-default-lwzbq\" (UID: \"f6423c2b-6727-4dd3-9dc2-d4c6d1dd4ebd\") " pod="openshift-dns/dns-default-lwzbq" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.634060 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0b5cf703-06c8-4a98-b58b-71543d23affe-plugins-dir\") pod \"csi-hostpathplugin-v7vzv\" (UID: \"0b5cf703-06c8-4a98-b58b-71543d23affe\") " pod="hostpath-provisioner/csi-hostpathplugin-v7vzv" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.634104 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8frz\" (UniqueName: \"kubernetes.io/projected/0b5cf703-06c8-4a98-b58b-71543d23affe-kube-api-access-z8frz\") pod \"csi-hostpathplugin-v7vzv\" (UID: \"0b5cf703-06c8-4a98-b58b-71543d23affe\") " pod="hostpath-provisioner/csi-hostpathplugin-v7vzv" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.634153 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0b5cf703-06c8-4a98-b58b-71543d23affe-registration-dir\") pod \"csi-hostpathplugin-v7vzv\" (UID: \"0b5cf703-06c8-4a98-b58b-71543d23affe\") " pod="hostpath-provisioner/csi-hostpathplugin-v7vzv" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.634177 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7fhs\" (UniqueName: \"kubernetes.io/projected/f6423c2b-6727-4dd3-9dc2-d4c6d1dd4ebd-kube-api-access-g7fhs\") pod \"dns-default-lwzbq\" (UID: \"f6423c2b-6727-4dd3-9dc2-d4c6d1dd4ebd\") " pod="openshift-dns/dns-default-lwzbq" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.635110 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6423c2b-6727-4dd3-9dc2-d4c6d1dd4ebd-config-volume\") pod \"dns-default-lwzbq\" (UID: \"f6423c2b-6727-4dd3-9dc2-d4c6d1dd4ebd\") " pod="openshift-dns/dns-default-lwzbq" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.635189 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" 
(UniqueName: \"kubernetes.io/host-path/0b5cf703-06c8-4a98-b58b-71543d23affe-mountpoint-dir\") pod \"csi-hostpathplugin-v7vzv\" (UID: \"0b5cf703-06c8-4a98-b58b-71543d23affe\") " pod="hostpath-provisioner/csi-hostpathplugin-v7vzv" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.635391 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0b5cf703-06c8-4a98-b58b-71543d23affe-socket-dir\") pod \"csi-hostpathplugin-v7vzv\" (UID: \"0b5cf703-06c8-4a98-b58b-71543d23affe\") " pod="hostpath-provisioner/csi-hostpathplugin-v7vzv" Jan 27 12:14:17 crc kubenswrapper[4745]: E0127 12:14:17.635463 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:18.1354094 +0000 UTC m=+150.940320288 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.641658 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0b5cf703-06c8-4a98-b58b-71543d23affe-plugins-dir\") pod \"csi-hostpathplugin-v7vzv\" (UID: \"0b5cf703-06c8-4a98-b58b-71543d23affe\") " pod="hostpath-provisioner/csi-hostpathplugin-v7vzv" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.641785 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0b5cf703-06c8-4a98-b58b-71543d23affe-csi-data-dir\") pod \"csi-hostpathplugin-v7vzv\" (UID: \"0b5cf703-06c8-4a98-b58b-71543d23affe\") " pod="hostpath-provisioner/csi-hostpathplugin-v7vzv" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.642382 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0b5cf703-06c8-4a98-b58b-71543d23affe-registration-dir\") pod \"csi-hostpathplugin-v7vzv\" (UID: \"0b5cf703-06c8-4a98-b58b-71543d23affe\") " pod="hostpath-provisioner/csi-hostpathplugin-v7vzv" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.652499 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/449f3406-19aa-43e5-8364-efd6f68ec1c7-node-bootstrap-token\") pod \"machine-config-server-njxtf\" (UID: \"449f3406-19aa-43e5-8364-efd6f68ec1c7\") " pod="openshift-machine-config-operator/machine-config-server-njxtf" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.671480 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/449f3406-19aa-43e5-8364-efd6f68ec1c7-certs\") pod \"machine-config-server-njxtf\" (UID: \"449f3406-19aa-43e5-8364-efd6f68ec1c7\") " pod="openshift-machine-config-operator/machine-config-server-njxtf" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.677632 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/f6423c2b-6727-4dd3-9dc2-d4c6d1dd4ebd-metrics-tls\") pod \"dns-default-lwzbq\" (UID: \"f6423c2b-6727-4dd3-9dc2-d4c6d1dd4ebd\") " pod="openshift-dns/dns-default-lwzbq" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.680932 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwtw9\" (UniqueName: \"kubernetes.io/projected/a7db0b54-29c2-4ab4-b919-c83dcbb8f094-kube-api-access-hwtw9\") pod \"machine-config-controller-84d6567774-gkffs\" (UID: \"a7db0b54-29c2-4ab4-b919-c83dcbb8f094\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gkffs" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.681248 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-skmp4" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.690336 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7fhs\" (UniqueName: \"kubernetes.io/projected/f6423c2b-6727-4dd3-9dc2-d4c6d1dd4ebd-kube-api-access-g7fhs\") pod \"dns-default-lwzbq\" (UID: \"f6423c2b-6727-4dd3-9dc2-d4c6d1dd4ebd\") " pod="openshift-dns/dns-default-lwzbq" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.690747 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cebf393-91a7-4018-8f08-358ca7f7155b-cert\") pod \"ingress-canary-9bws5\" (UID: \"8cebf393-91a7-4018-8f08-358ca7f7155b\") " pod="openshift-ingress-canary/ingress-canary-9bws5" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.694375 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/58509a9e-6184-4459-9e85-f8e999f965e3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p58fj\" (UID: \"58509a9e-6184-4459-9e85-f8e999f965e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p58fj" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.694969 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6wlnl" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.701558 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brnqb\" (UniqueName: \"kubernetes.io/projected/449f3406-19aa-43e5-8364-efd6f68ec1c7-kube-api-access-brnqb\") pod \"machine-config-server-njxtf\" (UID: \"449f3406-19aa-43e5-8364-efd6f68ec1c7\") " pod="openshift-machine-config-operator/machine-config-server-njxtf" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.701936 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bgpwd" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.719025 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wkfn\" (UniqueName: \"kubernetes.io/projected/8cebf393-91a7-4018-8f08-358ca7f7155b-kube-api-access-9wkfn\") pod \"ingress-canary-9bws5\" (UID: \"8cebf393-91a7-4018-8f08-358ca7f7155b\") " pod="openshift-ingress-canary/ingress-canary-9bws5" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.723683 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p58fj" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.734641 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:17 crc kubenswrapper[4745]: E0127 12:14:17.734956 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:18.234943391 +0000 UTC m=+151.039854079 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.748438 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8frz\" (UniqueName: \"kubernetes.io/projected/0b5cf703-06c8-4a98-b58b-71543d23affe-kube-api-access-z8frz\") pod \"csi-hostpathplugin-v7vzv\" (UID: \"0b5cf703-06c8-4a98-b58b-71543d23affe\") " pod="hostpath-provisioner/csi-hostpathplugin-v7vzv" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.750007 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gkffs" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.770334 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqb7h" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.804068 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-lwzbq" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.813356 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-njxtf" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.826488 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-v7vzv" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.831635 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9bws5" Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.836036 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:17 crc kubenswrapper[4745]: E0127 12:14:17.837265 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:18.337235271 +0000 UTC m=+151.142145969 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.837772 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:17 crc kubenswrapper[4745]: E0127 12:14:17.838287 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:18.338275741 +0000 UTC m=+151.143186429 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.939538 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:17 crc kubenswrapper[4745]: E0127 12:14:17.939868 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:18.439852031 +0000 UTC m=+151.244762719 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.973685 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jqmqz"] Jan 27 12:14:17 crc kubenswrapper[4745]: I0127 12:14:17.986957 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hv45d"] Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.011713 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86448"] Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.011783 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7xjzm"] Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.013769 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hbsbc" event={"ID":"478908d6-765e-4bd8-a3ef-3142a7641a3b","Type":"ContainerStarted","Data":"92f38c5b389b104b9805b291f8142622c2958d40dec44e8dffe0957115aae9c3"} Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.023680 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" event={"ID":"8481d31f-f701-4821-9893-5ebf45d2dcb8","Type":"ContainerStarted","Data":"9a9d608edecbd2447e88cb41653f8576f10819ac176b3420633935c74a10f58c"} Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.026332 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" event={"ID":"65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2","Type":"ContainerStarted","Data":"53643a8871fb7fe074c5e1a4915dad1176d4d39c129f07891d0398770fa39a0c"} Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.029661 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lrlf2" event={"ID":"76633ed6-00c1-4c35-aa9c-93c0867d676d","Type":"ContainerStarted","Data":"c1eacd3f1766b7ff4b4a1eaec4bd5ddaa177beb33dbde489a3745430230559e2"} Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.040931 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:18 crc kubenswrapper[4745]: E0127 12:14:18.042008 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:18.541987126 +0000 UTC m=+151.346897814 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.051349 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s4vnk" event={"ID":"a4a46bed-8781-4e46-a70e-868c24144a1f","Type":"ContainerStarted","Data":"1f39bbe360a39fa092f1498df8275cd50333b08f91770816770b422b500038fc"} Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.071729 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4" event={"ID":"46ec327c-832f-4a20-9b99-1aa3315c312f","Type":"ContainerStarted","Data":"31305266d1570d01864c2f7df86ea90fac3091ac25bd456054aac3c15dc7da37"} Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.122582 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5mbhc" event={"ID":"71fd83ec-fa99-4caa-a216-1f1bb2be9251","Type":"ContainerStarted","Data":"7ff90dd37d3612e3dbfb6ce678253dc31f0441f73fc7843f32570b6f7620151d"} Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.135402 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-scl78" event={"ID":"2f032c11-b3c4-45f7-be15-d5873624adcd","Type":"ContainerStarted","Data":"ab98ea4d508f88f123234ee769a5727b7f48c10efbf7412926253efc6a3475f7"} Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.141368 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4hbbw" event={"ID":"f880472d-b13f-4b62-946f-3d74aafe5743","Type":"ContainerStarted","Data":"3b7af8670f769169a7d829c329a9bbddbdbf6a85993bd19b2f2f0a5850fb1f27"} Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.141766 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:18 crc kubenswrapper[4745]: E0127 12:14:18.142349 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:18.64232789 +0000 UTC m=+151.447238578 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.243603 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:18 crc kubenswrapper[4745]: E0127 12:14:18.244016 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:18.744003953 +0000 UTC m=+151.548914641 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.344597 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:18 crc kubenswrapper[4745]: E0127 12:14:18.344914 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:18.844880953 +0000 UTC m=+151.649791641 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.345538 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:18 crc kubenswrapper[4745]: E0127 12:14:18.346561 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:18.846551741 +0000 UTC m=+151.651462429 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.450289 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:18 crc kubenswrapper[4745]: E0127 12:14:18.450692 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:18.950676954 +0000 UTC m=+151.755587642 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.499147 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-44s8w"] Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.502911 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4blhs"] Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.551341 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:18 crc kubenswrapper[4745]: E0127 12:14:18.551751 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:19.051736529 +0000 UTC m=+151.856647217 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.564236 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491920-s88fm"] Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.566194 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ndbtg"] Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.574218 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-sssvv"] Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.652639 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:18 crc kubenswrapper[4745]: E0127 12:14:18.653164 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:19.153143373 +0000 UTC m=+151.958054051 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:18 crc kubenswrapper[4745]: W0127 12:14:18.692758 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8d58e84_2299_4ceb_bb86_e5e7a451b3bc.slice/crio-e2d0eb9c80a0342d60e15f9f561280329feb170a390d7d7613cc8e95efed8468 WatchSource:0}: Error finding container e2d0eb9c80a0342d60e15f9f561280329feb170a390d7d7613cc8e95efed8468: Status 404 returned error can't find the container with id e2d0eb9c80a0342d60e15f9f561280329feb170a390d7d7613cc8e95efed8468 Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.707370 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zqrwf"] Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.707738 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5mbc7"] Jan 27 12:14:18 crc kubenswrapper[4745]: W0127 12:14:18.713216 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58f20271_d6bc_42dc_8932_fe80286fecd1.slice/crio-416a249d563ac9000abe6431a99383c6c93cae464c3fae0a06403623a337c4ed WatchSource:0}: Error finding container 416a249d563ac9000abe6431a99383c6c93cae464c3fae0a06403623a337c4ed: Status 404 returned error can't find the container with id 416a249d563ac9000abe6431a99383c6c93cae464c3fae0a06403623a337c4ed Jan 27 12:14:18 crc kubenswrapper[4745]: W0127 12:14:18.719661 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6086ad74_5d02_4181_bb34_8c116409de42.slice/crio-fb2b764cfa5cc44865312b24adb2d84b4224a6eb7bdf164eea11dbc2b8419743 WatchSource:0}: Error finding container fb2b764cfa5cc44865312b24adb2d84b4224a6eb7bdf164eea11dbc2b8419743: Status 404 returned error can't find the container with id fb2b764cfa5cc44865312b24adb2d84b4224a6eb7bdf164eea11dbc2b8419743 Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.757737 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:18 crc kubenswrapper[4745]: E0127 12:14:18.758227 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:19.258214163 +0000 UTC m=+152.063124851 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.760380 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ks2hk"]
Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.775287 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bgpwd"]
Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.785907 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-gkffs"]
Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.797778 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jfk2x"]
Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.807056 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6wlnl"]
Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.824519 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rxlbk"]
Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.826059 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5tpbz"]
Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.840696 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-85x5w"]
Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.859943 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:18 crc kubenswrapper[4745]: E0127 12:14:18.860077 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:19.36004523 +0000 UTC m=+152.164955918 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.860478 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:18 crc kubenswrapper[4745]: E0127 12:14:18.868865 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:19.368796772 +0000 UTC m=+152.173707460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:18 crc kubenswrapper[4745]: W0127 12:14:18.889354 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cd78bf5_69f3_4074_9dea_c7a459de6d4d.slice/crio-220344e56572558fd8f8fb94fecabca5e92e4b873bb794a6db80ce7ea188431a WatchSource:0}: Error finding container 220344e56572558fd8f8fb94fecabca5e92e4b873bb794a6db80ce7ea188431a: Status 404 returned error can't find the container with id 220344e56572558fd8f8fb94fecabca5e92e4b873bb794a6db80ce7ea188431a
Jan 27 12:14:18 crc kubenswrapper[4745]: W0127 12:14:18.892264 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5c47589_d94b_44fb_b31f_1f4045ea9e3c.slice/crio-39dbf907678ca50b26be46a55e226b14172b525e658eee0335fd12a3d56b9716 WatchSource:0}: Error finding container 39dbf907678ca50b26be46a55e226b14172b525e658eee0335fd12a3d56b9716: Status 404 returned error can't find the container with id 39dbf907678ca50b26be46a55e226b14172b525e658eee0335fd12a3d56b9716
Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.939231 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-l8dg8"]
Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.958361 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-skmp4"]
Jan 27 12:14:18 crc kubenswrapper[4745]: E0127 12:14:18.963008 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:19.462981849 +0000 UTC m=+152.267892527 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.963199 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.963534 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:18 crc kubenswrapper[4745]: E0127 12:14:18.963965 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:19.463951747 +0000 UTC m=+152.268862435 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.968471 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p58fj"]
Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.971488 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zcwfv"]
Jan 27 12:14:18 crc kubenswrapper[4745]: W0127 12:14:18.993123 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89c991f5_72eb_4c8e_a31f_9db5b46ffc5d.slice/crio-7eec6a19d250e127f0d4d1491000ae075518bb703ab49c85894e7e28cf0810c1 WatchSource:0}: Error finding container 7eec6a19d250e127f0d4d1491000ae075518bb703ab49c85894e7e28cf0810c1: Status 404 returned error can't find the container with id 7eec6a19d250e127f0d4d1491000ae075518bb703ab49c85894e7e28cf0810c1
Jan 27 12:14:18 crc kubenswrapper[4745]: I0127 12:14:18.996567 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-r79xk"]
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.004358 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-v7vzv"]
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.006342 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9bws5"]
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.031895 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lwzbq"]
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.064639 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:19 crc kubenswrapper[4745]: E0127 12:14:19.064853 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:19.564802676 +0000 UTC m=+152.369713364 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.064998 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.065114 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.065145 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 12:14:19 crc kubenswrapper[4745]: E0127 12:14:19.065923 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:19.565897907 +0000 UTC m=+152.370808775 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.066176 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.079449 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqb7h"]
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.079872 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.162875 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86448" event={"ID":"792e272a-64cd-47cd-8aac-eeb295e49f05","Type":"ContainerStarted","Data":"6392d061b0e9bb1de646876fe2161a486d220c85f4f65d535c9045e6ee1723bb"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.164394 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zqrwf" event={"ID":"94eb6425-bdf2-43d1-926e-c94700a985be","Type":"ContainerStarted","Data":"db5bcc70db46110ef9414198f8ac0e6c653383151c993abb3b6e5eeec72cd67c"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.165721 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p58fj" event={"ID":"58509a9e-6184-4459-9e85-f8e999f965e3","Type":"ContainerStarted","Data":"2aff9a9e46499f30752f5670fae75ee381d45d233b351b0069daeb67213d0824"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.166083 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:19 crc kubenswrapper[4745]: E0127 12:14:19.166269 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:19.666242082 +0000 UTC m=+152.471152770 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.166529 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.166574 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.166641 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.166786 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8dg8" event={"ID":"b98945dd-f382-4fef-97b6-9037edd2bd9f","Type":"ContainerStarted","Data":"088d5469d0fbeb8004ff3cc009a5360446c23907d08b98c366eb2e0a44a4ddd8"}
Jan 27 12:14:19 crc kubenswrapper[4745]: E0127 12:14:19.166972 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:19.666963362 +0000 UTC m=+152.471874050 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.170417 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zcwfv" event={"ID":"dddac23a-b546-4121-85df-7475aa7c5801","Type":"ContainerStarted","Data":"3320b7c14b8f2cb5f764c5b76e6033c5a3b4d38c788720984ac341485bd317ff"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.170647 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.171084 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.172208 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqb7h" event={"ID":"0a76652e-d0b0-449d-9e41-b363948890bf","Type":"ContainerStarted","Data":"7da53acce8bc3b8aa24b72242c62a74a2e6bee131a9cea5d1238be6d02f91886"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.174516 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xhn4h" event={"ID":"ea5a99ae-b999-419c-9da0-1333ba6378ea","Type":"ContainerStarted","Data":"f00c429c67383348e028cd301212fce7b3d3860a0f7f78095cc9c6456bd52a43"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.175391 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5tpbz" event={"ID":"59d3b2e2-c186-4551-b6b6-962b13b3a058","Type":"ContainerStarted","Data":"6425ebf3b8af4dce04f9a9c8a3d9eb12174776337440deabc053706d8e4d88c5"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.176308 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg" event={"ID":"1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1","Type":"ContainerStarted","Data":"9caa386630c9d78a180790c3b772b585eff7964cc4abf5f978cb505f8b857542"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.178304 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-xhn4h"
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.180914 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rxlbk" event={"ID":"b18e2d30-da7f-4c5f-9700-20c7c05b1043","Type":"ContainerStarted","Data":"16a9c85356220ab70a47c9a009c106a7b64802e84f5b39a6d81cf18dc6d12fc9"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.181906 4745 patch_prober.go:28] interesting pod/console-operator-58897d9998-xhn4h container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.182153 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xhn4h" podUID="ea5a99ae-b999-419c-9da0-1333ba6378ea" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.184310 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44s8w" event={"ID":"c8d58e84-2299-4ceb-bb86-e5e7a451b3bc","Type":"ContainerStarted","Data":"e2d0eb9c80a0342d60e15f9f561280329feb170a390d7d7613cc8e95efed8468"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.187047 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5mbc7" event={"ID":"f7f10ec2-24a7-445e-8ae2-49da5ad6cf71","Type":"ContainerStarted","Data":"583ec09f487714acb627f7987eefd4ef93525d5735590c67bfac41185b6bfd6e"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.188644 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4blhs" event={"ID":"58f20271-d6bc-42dc-8932-fe80286fecd1","Type":"ContainerStarted","Data":"416a249d563ac9000abe6431a99383c6c93cae464c3fae0a06403623a337c4ed"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.189951 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-njxtf" event={"ID":"449f3406-19aa-43e5-8364-efd6f68ec1c7","Type":"ContainerStarted","Data":"920e0c9be452182eee51283cab1f83f86c3051fa9f19a5d44c1b3a26c95ac78e"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.191593 4745 generic.go:334] "Generic (PLEG): container finished" podID="78fe56b7-5ff3-4540-bfda-efeef43859f6" containerID="b95eca4ac34cb17163dfc014197ed196ae02d3e390f3172889a170a090d230ff" exitCode=0
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.191653 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" event={"ID":"78fe56b7-5ff3-4540-bfda-efeef43859f6","Type":"ContainerDied","Data":"b95eca4ac34cb17163dfc014197ed196ae02d3e390f3172889a170a090d230ff"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.196105 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85x5w" event={"ID":"b5c47589-d94b-44fb-b31f-1f4045ea9e3c","Type":"ContainerStarted","Data":"39dbf907678ca50b26be46a55e226b14172b525e658eee0335fd12a3d56b9716"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.198716 4745 generic.go:334] "Generic (PLEG): container finished" podID="cef45995-4242-499f-adeb-cc12aa630b5c" containerID="f94699cffc6f779b0e9b617f0210291293bc7eec04955b05f82db8b96ae2fc79" exitCode=0
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.198773 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tf24j" event={"ID":"cef45995-4242-499f-adeb-cc12aa630b5c","Type":"ContainerDied","Data":"f94699cffc6f779b0e9b617f0210291293bc7eec04955b05f82db8b96ae2fc79"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.200893 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7xjzm" event={"ID":"97d3d9df-e52f-4eb3-8034-c5ace5c23da3","Type":"ContainerStarted","Data":"1093206f9e5d5aedc2f44fd992132b1a181c914aeeb79e89a52124e0bebe788e"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.202663 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6wlnl" event={"ID":"4c71dc30-ef02-41cf-a2f8-973dfc972054","Type":"ContainerStarted","Data":"cd2ac08613d5e0a6e5d29262243c49378be666d6a4b748b41d99b8525150bcfe"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.203469 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ks2hk" event={"ID":"7d3ec1e8-113d-4dbd-9a48-6daea4c74cdf","Type":"ContainerStarted","Data":"9039f22d55c7fca6c33d46200f82ae60b708cd1d07f86452de3c0a09ed6b84a1"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.204238 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v7vzv" event={"ID":"0b5cf703-06c8-4a98-b58b-71543d23affe","Type":"ContainerStarted","Data":"907f3213e14a3bed654e866e4b9e7fb8503917c52c8d2ec05b0011a7da05d0d6"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.204926 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-sssvv" event={"ID":"ada46f99-5088-4a53-b7b6-cc0d93f72412","Type":"ContainerStarted","Data":"9bc0b23775bb2c7c4af95130e78420147d29b50f1aad123696aa69bd3cd31dd8"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.205656 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jfk2x" event={"ID":"5cd78bf5-69f3-4074-9dea-c7a459de6d4d","Type":"ContainerStarted","Data":"220344e56572558fd8f8fb94fecabca5e92e4b873bb794a6db80ce7ea188431a"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.206343 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jqmqz" event={"ID":"32a5e4ed-16fd-4922-ac7f-515ea14b4fe5","Type":"ContainerStarted","Data":"2bd80c22349c6b14b6f48e136f3515386e82026294d63357537e4bfeba0eddfd"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.207478 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491920-s88fm" event={"ID":"6086ad74-5d02-4181-bb34-8c116409de42","Type":"ContainerStarted","Data":"fb2b764cfa5cc44865312b24adb2d84b4224a6eb7bdf164eea11dbc2b8419743"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.208128 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gkffs" event={"ID":"a7db0b54-29c2-4ab4-b919-c83dcbb8f094","Type":"ContainerStarted","Data":"9e96696350484d96f01d558e8555fa598b21e994abb39e234740b39d9b30d19f"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.211048 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hv45d" event={"ID":"26625d33-dbca-4e3f-97eb-34956096bf8a","Type":"ContainerStarted","Data":"63ce922914dee5bf473b682e98e9ae9055aad106b2cf0b8ce9016228587ea5b1"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.211869 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lwzbq" event={"ID":"f6423c2b-6727-4dd3-9dc2-d4c6d1dd4ebd","Type":"ContainerStarted","Data":"1d200b6daeeec34d3377169fc67716cd58c1d4d0453b6d1ccec1b1f76e979a03"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.212570 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-r79xk" event={"ID":"199bb9ad-0a44-4631-995f-c4ef6809cd54","Type":"ContainerStarted","Data":"250d8183ddc3bc311bd0ef44926e0d92b284254362e6669addf939edd300d47c"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.213322 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9bws5" event={"ID":"8cebf393-91a7-4018-8f08-358ca7f7155b","Type":"ContainerStarted","Data":"6434f1f24511d1cf9a5ef9ce3ca9f62042cc6a07a1d4b42af0352e0baaefbea3"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.214038 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-skmp4" event={"ID":"89c991f5-72eb-4c8e-a31f-9db5b46ffc5d","Type":"ContainerStarted","Data":"7eec6a19d250e127f0d4d1491000ae075518bb703ab49c85894e7e28cf0810c1"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.214894 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bgpwd" event={"ID":"ac22d21e-ce2f-4e46-8b65-e6c84480b954","Type":"ContainerStarted","Data":"5fbabd9e7a09330737ddb612f31e74681d0e92644b7bcd669d2e17267b6a8892"}
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.215148 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-hbsbc"
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.215173 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4"
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.217197 4745 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-8nsr4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.217271 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4" podUID="46ec327c-832f-4a20-9b99-1aa3315c312f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.217208 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbsbc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.217370 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.268528 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:19 crc kubenswrapper[4745]: E0127 12:14:19.268713 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:19.768690056 +0000 UTC m=+152.573600744 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.268834 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:19 crc kubenswrapper[4745]: E0127 12:14:19.269763 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:19.769726216 +0000 UTC m=+152.574636914 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.310482 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.323389 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.332019 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.369846 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:19 crc kubenswrapper[4745]: E0127 12:14:19.371113 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:19.87109861 +0000 UTC m=+152.676009298 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.471576 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:19 crc kubenswrapper[4745]: E0127 12:14:19.471958 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:19.971945879 +0000 UTC m=+152.776856567 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.572626 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:19 crc kubenswrapper[4745]: E0127 12:14:19.573187 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:20.073166608 +0000 UTC m=+152.878077296 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.674033 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:19 crc kubenswrapper[4745]: E0127 12:14:19.674730 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:20.174713287 +0000 UTC m=+152.979623975 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.775203 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:19 crc kubenswrapper[4745]: E0127 12:14:19.775592 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:20.275571996 +0000 UTC m=+153.080482684 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.878086 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:19 crc kubenswrapper[4745]: E0127 12:14:19.879465 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:20.379434151 +0000 UTC m=+153.184344839 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.980758 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:19 crc kubenswrapper[4745]: E0127 12:14:19.981120 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:20.481094394 +0000 UTC m=+153.286005082 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:19 crc kubenswrapper[4745]: I0127 12:14:19.981178 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:19 crc kubenswrapper[4745]: E0127 12:14:19.981501 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:20.481488325 +0000 UTC m=+153.286399013 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:19 crc kubenswrapper[4745]: W0127 12:14:19.987391 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-03d0148cb3eedffccef3034cb285d9d0dd68e9d47ca6c3ec604ed0a58387d6ff WatchSource:0}: Error finding container 03d0148cb3eedffccef3034cb285d9d0dd68e9d47ca6c3ec604ed0a58387d6ff: Status 404 returned error can't find the container with id 03d0148cb3eedffccef3034cb285d9d0dd68e9d47ca6c3ec604ed0a58387d6ff
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.082289 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:20 crc kubenswrapper[4745]: E0127 12:14:20.082453 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:20.582424416 +0000 UTC m=+153.387335114 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.082904 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:20 crc kubenswrapper[4745]: E0127 12:14:20.083308 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:20.583295411 +0000 UTC m=+153.388206119 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.143860 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4" podStartSLOduration=129.143833421 podStartE2EDuration="2m9.143833421s" podCreationTimestamp="2026-01-27 12:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:20.131391204 +0000 UTC m=+152.936301902" watchObservedRunningTime="2026-01-27 12:14:20.143833421 +0000 UTC m=+152.948744109"
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.160208 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-hbsbc" podStartSLOduration=130.160185721 podStartE2EDuration="2m10.160185721s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:20.151194783 +0000 UTC m=+152.956105481" watchObservedRunningTime="2026-01-27 12:14:20.160185721 +0000 UTC m=+152.965096409"
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.184190 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:20 crc kubenswrapper[4745]: E0127 12:14:20.184592 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:20.684571681 +0000 UTC m=+153.489482369 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.206183 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-xhn4h" podStartSLOduration=130.206158802 podStartE2EDuration="2m10.206158802s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:20.203763113 +0000 UTC m=+153.008673801" watchObservedRunningTime="2026-01-27 12:14:20.206158802 +0000 UTC m=+153.011069490"
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.227665 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4hbbw" event={"ID":"f880472d-b13f-4b62-946f-3d74aafe5743","Type":"ContainerStarted","Data":"94d2288273b121a2174db8e67f7dd894be33afd80991a94fd9db3b1571fd7a9c"}
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.228777 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"bf2a9d53b7ca545c471e0dd64df1ce4b27a3f6c3d832934131d8527865bdace1"}
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.231856 4745 generic.go:334] "Generic (PLEG): container finished" podID="65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2" containerID="53643a8871fb7fe074c5e1a4915dad1176d4d39c129f07891d0398770fa39a0c" exitCode=0
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.231919 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" event={"ID":"65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2","Type":"ContainerDied","Data":"53643a8871fb7fe074c5e1a4915dad1176d4d39c129f07891d0398770fa39a0c"}
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.235231 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"03d0148cb3eedffccef3034cb285d9d0dd68e9d47ca6c3ec604ed0a58387d6ff"}
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.237615 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s4vnk" event={"ID":"a4a46bed-8781-4e46-a70e-868c24144a1f","Type":"ContainerStarted","Data":"e54ed9c65cb4e66c90bb6a1e367c89e8e604a8440096291207922186aaf21b1e"}
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.238986 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e149788cdcdd7451e0d1d74e625bb85b6be4285d0f4d03065c91ce288f59f8d0"}
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.240359 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" event={"ID":"8481d31f-f701-4821-9893-5ebf45d2dcb8","Type":"ContainerStarted","Data":"81131ad077bdba0a5434f9f7d53e0f53c7a4a0eadc71c1f4aeb1f95e403b187c"}
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.243687 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5mbhc" event={"ID":"71fd83ec-fa99-4caa-a216-1f1bb2be9251","Type":"ContainerStarted","Data":"a09de9cac394552d3a0b60404f87b777ce98738745c6e7bb7e4c2ca5bbf6651f"}
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.244215 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbsbc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.244254 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.244498 4745 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-8nsr4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.244630 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4" podUID="46ec327c-832f-4a20-9b99-1aa3315c312f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.247005 4745 patch_prober.go:28] interesting pod/console-operator-58897d9998-xhn4h container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.247043 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xhn4h" podUID="ea5a99ae-b999-419c-9da0-1333ba6378ea" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.286891 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:20 crc kubenswrapper[4745]: E0127 12:14:20.287271 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:20.787253423 +0000 UTC m=+153.592164111 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.389204 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:20 crc kubenswrapper[4745]: E0127 12:14:20.389451 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:20.889403989 +0000 UTC m=+153.694314677 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.390371 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:20 crc kubenswrapper[4745]: E0127 12:14:20.392784 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:20.892760795 +0000 UTC m=+153.697671483 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.492244 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:20 crc kubenswrapper[4745]: E0127 12:14:20.492501 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:20.992453411 +0000 UTC m=+153.797364099 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.492604 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:20 crc kubenswrapper[4745]: E0127 12:14:20.492950 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:20.992934625 +0000 UTC m=+153.797845523 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.594615 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:20 crc kubenswrapper[4745]: E0127 12:14:20.594892 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:21.094843384 +0000 UTC m=+153.899754082 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.594982 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:20 crc kubenswrapper[4745]: E0127 12:14:20.595618 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:21.095602686 +0000 UTC m=+153.900513564 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.696451 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:20 crc kubenswrapper[4745]: E0127 12:14:20.697004 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:21.19697867 +0000 UTC m=+154.001889378 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.797963 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:20 crc kubenswrapper[4745]: E0127 12:14:20.798359 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:21.298347683 +0000 UTC m=+154.103258371 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.899638 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:20 crc kubenswrapper[4745]: E0127 12:14:20.899970 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:21.399932653 +0000 UTC m=+154.204843341 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:20 crc kubenswrapper[4745]: I0127 12:14:20.900282 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:20 crc kubenswrapper[4745]: E0127 12:14:20.900754 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:21.400728646 +0000 UTC m=+154.205639374 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
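The kubelet lines above loop because the volume reconciler re-queues the same MountVolume/UnmountVolume operations on every pass (roughly every 100ms in this log) while the CSI driver is still unregistered, and each failure stamps a fresh retry deadline of now + 500ms, the "durationBeforeRetry 500ms" printed by nestedpendingoperations.go:348. A minimal Go sketch of that deadline arithmetic, assuming the fixed 500ms delay this log shows; upstream kubelet can also grow the delay exponentially between consecutive failures, which is an assumption not modeled here:

package main

import (
	"fmt"
	"time"
)

// Minimal sketch of the retry pacing in the kubelet lines above: a failed
// volume operation records "no retries permitted until <failure + 500ms>"
// (the durationBeforeRetry printed by nestedpendingoperations.go:348), and
// reconciler passes before that deadline skip the operation. The fixed
// 500ms mirrors this log; exponential growth of the delay is an assumption
// about the general kubelet behavior and is not shown here.
func main() {
	// Timestamp taken from the E0127 12:14:20.900754 line above.
	failedAt, err := time.Parse(time.RFC3339Nano, "2026-01-27T12:14:20.900754Z")
	if err != nil {
		panic(err)
	}
	durationBeforeRetry := 500 * time.Millisecond

	deadline := failedAt.Add(durationBeforeRetry)
	fmt.Println("no retries permitted until:", deadline.Format(time.RFC3339Nano))

	// A reconciler pass ~100ms later (the loop period visible in this log)
	// still finds the operation in backoff and skips it.
	reconcileAt := failedAt.Add(100 * time.Millisecond)
	if reconcileAt.Before(deadline) {
		fmt.Println("still in backoff; skipping this pass")
	}
}
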
Jan 27 12:14:21 crc kubenswrapper[4745]: I0127 12:14:21.001573 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:21 crc kubenswrapper[4745]: E0127 12:14:21.001870 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:21.501770251 +0000 UTC m=+154.306680949 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:21 crc kubenswrapper[4745]: I0127 12:14:21.002047 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:21 crc kubenswrapper[4745]: E0127 12:14:21.002506 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:21.502479481 +0000 UTC m=+154.307390209 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:21 crc kubenswrapper[4745]: I0127 12:14:21.103787 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:21 crc kubenswrapper[4745]: E0127 12:14:21.104186 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:21.604156024 +0000 UTC m=+154.409066752 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:21 crc kubenswrapper[4745]: I0127 12:14:21.104563 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:21 crc kubenswrapper[4745]: E0127 12:14:21.104937 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:21.604908995 +0000 UTC m=+154.409819783 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:21 crc kubenswrapper[4745]: I0127 12:14:21.205606 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:21 crc kubenswrapper[4745]: E0127 12:14:21.205836 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:21.705791315 +0000 UTC m=+154.510702003 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:21 crc kubenswrapper[4745]: I0127 12:14:21.206039 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:21 crc kubenswrapper[4745]: E0127 12:14:21.206371 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:21.706362731 +0000 UTC m=+154.511273419 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:21 crc kubenswrapper[4745]: I0127 12:14:21.253651 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lrlf2" event={"ID":"76633ed6-00c1-4c35-aa9c-93c0867d676d","Type":"ContainerStarted","Data":"ca8cb859d32f09363bda98e6668a6e9867b767fc402382833bbbac37cb8a957f"} Jan 27 12:14:21 crc kubenswrapper[4745]: I0127 12:14:21.256699 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7xjzm" event={"ID":"97d3d9df-e52f-4eb3-8034-c5ace5c23da3","Type":"ContainerStarted","Data":"67a5fd03f462ea9e332f686dc1c9f276a32bdb1510ce5b533d9e2f94a7bfa2a5"} Jan 27 12:14:21 crc kubenswrapper[4745]: I0127 12:14:21.259290 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jqmqz" event={"ID":"32a5e4ed-16fd-4922-ac7f-515ea14b4fe5","Type":"ContainerStarted","Data":"ccb940891ebb0f977c15485d0621d5291dad7c28c29d8ed32ae493a4c741304a"} Jan 27 12:14:21 crc kubenswrapper[4745]: I0127 12:14:21.261047 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hv45d" event={"ID":"26625d33-dbca-4e3f-97eb-34956096bf8a","Type":"ContainerStarted","Data":"b1cc3c30eb02c6b537ad0e05eeff47b44ba2425894f663e20d0d7ad31b1862c8"} Jan 27 12:14:21 crc kubenswrapper[4745]: I0127 12:14:21.307064 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:21 crc kubenswrapper[4745]: E0127 12:14:21.307281 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:21.807245511 +0000 UTC m=+154.612156229 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
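Every failure in this stretch has the same root cause, visible at the end of each message: "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers", meaning the driver's node plugin had not yet registered with this kubelet while the image-registry pod's PVC was being mounted and the old pod's volume torn down. One way to watch that registration from the API side is to read the node's CSINode object, which the kubelet populates as drivers register. A minimal client-go sketch, assuming a reachable default kubeconfig and that the node is named "crc" to match the hostname in these lines (both assumptions about this cluster):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config); assumes running off-node.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// The CSINode object mirrors the kubelet's per-node driver registry.
	// While the errors above are firing, kubevirt.io.hostpath-provisioner
	// is absent from this list; the mount retries succeed once it appears.
	csiNode, err := client.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println("registered on node:", d.Name)
	}
}
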
Jan 27 12:14:21 crc kubenswrapper[4745]: I0127 12:14:21.307521 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:21 crc kubenswrapper[4745]: E0127 12:14:21.307982 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:21.807963762 +0000 UTC m=+154.612874490 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:21 crc kubenswrapper[4745]: I0127 12:14:21.408328 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:21 crc kubenswrapper[4745]: E0127 12:14:21.408539 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:21.908511812 +0000 UTC m=+154.713422510 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:21 crc kubenswrapper[4745]: I0127 12:14:21.408742 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:21 crc kubenswrapper[4745]: E0127 12:14:21.409123 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:21.909109079 +0000 UTC m=+154.714019767 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:21 crc kubenswrapper[4745]: I0127 12:14:21.510179 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:21 crc kubenswrapper[4745]: E0127 12:14:21.510558 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:22.010543605 +0000 UTC m=+154.815454293 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:21 crc kubenswrapper[4745]: I0127 12:14:21.611601 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:21 crc kubenswrapper[4745]: E0127 12:14:21.611972 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:22.11195619 +0000 UTC m=+154.916866878 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:21 crc kubenswrapper[4745]: I0127 12:14:21.712752 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:21 crc kubenswrapper[4745]: E0127 12:14:21.713072 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:22.213031855 +0000 UTC m=+155.017942573 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:21 crc kubenswrapper[4745]: I0127 12:14:21.713303 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:21 crc kubenswrapper[4745]: E0127 12:14:21.713856 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:22.213803967 +0000 UTC m=+155.018714685 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:21 crc kubenswrapper[4745]: I0127 12:14:21.815438 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:21 crc kubenswrapper[4745]: E0127 12:14:21.815754 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:22.315712726 +0000 UTC m=+155.120623434 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:21 crc kubenswrapper[4745]: I0127 12:14:21.815871 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:21 crc kubenswrapper[4745]: E0127 12:14:21.816652 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:22.316633653 +0000 UTC m=+155.121544341 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:21 crc kubenswrapper[4745]: I0127 12:14:21.917350 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:21 crc kubenswrapper[4745]: E0127 12:14:21.917589 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:22.417555604 +0000 UTC m=+155.222466292 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:21 crc kubenswrapper[4745]: I0127 12:14:21.917728 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:21 crc kubenswrapper[4745]: E0127 12:14:21.918215 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:22.418196322 +0000 UTC m=+155.223107010 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:22 crc kubenswrapper[4745]: I0127 12:14:22.018881 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:22 crc kubenswrapper[4745]: E0127 12:14:22.019303 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:22.519254087 +0000 UTC m=+155.324164775 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:22 crc kubenswrapper[4745]: I0127 12:14:22.019408 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:22 crc kubenswrapper[4745]: E0127 12:14:22.019901 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:22.519891685 +0000 UTC m=+155.324802573 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:22 crc kubenswrapper[4745]: I0127 12:14:22.120259 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:22 crc kubenswrapper[4745]: E0127 12:14:22.120789 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:22.620735604 +0000 UTC m=+155.425646292 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:22 crc kubenswrapper[4745]: I0127 12:14:22.121235 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:22 crc kubenswrapper[4745]: E0127 12:14:22.122193 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:22.622166825 +0000 UTC m=+155.427077503 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:22 crc kubenswrapper[4745]: I0127 12:14:22.223452 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:22 crc kubenswrapper[4745]: E0127 12:14:22.223715 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:22.723668823 +0000 UTC m=+155.528579511 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:22 crc kubenswrapper[4745]: I0127 12:14:22.223939 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:22 crc kubenswrapper[4745]: E0127 12:14:22.224456 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:22.724432265 +0000 UTC m=+155.529342963 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:22 crc kubenswrapper[4745]: I0127 12:14:22.268887 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bgpwd" event={"ID":"ac22d21e-ce2f-4e46-8b65-e6c84480b954","Type":"ContainerStarted","Data":"62cb0b5ed62cdc671b3ee3c7712b83c2499716361a9b12a7e16c47d5c36007ae"} Jan 27 12:14:22 crc kubenswrapper[4745]: I0127 12:14:22.271422 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-njxtf" event={"ID":"449f3406-19aa-43e5-8364-efd6f68ec1c7","Type":"ContainerStarted","Data":"2b3affd180d008da234ad9b797dd8eac27edf2625214968f47481149c137b985"} Jan 27 12:14:22 crc kubenswrapper[4745]: I0127 12:14:22.324657 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:22 crc kubenswrapper[4745]: E0127 12:14:22.325053 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:22.824996865 +0000 UTC m=+155.629907583 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:22 crc kubenswrapper[4745]: I0127 12:14:22.325170 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:22 crc kubenswrapper[4745]: E0127 12:14:22.325884 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:22.82585707 +0000 UTC m=+155.630767798 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:22 crc kubenswrapper[4745]: I0127 12:14:22.426891 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:22 crc kubenswrapper[4745]: E0127 12:14:22.427084 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:22.927055039 +0000 UTC m=+155.731965747 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:22 crc kubenswrapper[4745]: I0127 12:14:22.427291 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:22 crc kubenswrapper[4745]: E0127 12:14:22.427962 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:22.927948694 +0000 UTC m=+155.732859392 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:22 crc kubenswrapper[4745]: I0127 12:14:22.529063 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:22 crc kubenswrapper[4745]: E0127 12:14:22.529363 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:23.029337109 +0000 UTC m=+155.834247817 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:22 crc kubenswrapper[4745]: I0127 12:14:22.529863 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:22 crc kubenswrapper[4745]: E0127 12:14:22.530290 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:23.030277426 +0000 UTC m=+155.835188134 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:22 crc kubenswrapper[4745]: I0127 12:14:22.632566 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:22 crc kubenswrapper[4745]: E0127 12:14:22.632977 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:23.132961417 +0000 UTC m=+155.937872105 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:22 crc kubenswrapper[4745]: I0127 12:14:22.735184 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:22 crc kubenswrapper[4745]: E0127 12:14:22.735622 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:23.235607517 +0000 UTC m=+156.040518205 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:22 crc kubenswrapper[4745]: I0127 12:14:22.836570 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:22 crc kubenswrapper[4745]: E0127 12:14:22.836911 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:23.336896649 +0000 UTC m=+156.141807337 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:22 crc kubenswrapper[4745]: I0127 12:14:22.937854 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:22 crc kubenswrapper[4745]: E0127 12:14:22.938264 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:23.438243502 +0000 UTC m=+156.243154200 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.039454 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:23 crc kubenswrapper[4745]: E0127 12:14:23.039695 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:23.539648887 +0000 UTC m=+156.344559615 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.141037 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:23 crc kubenswrapper[4745]: E0127 12:14:23.141343 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:23.641332519 +0000 UTC m=+156.446243207 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.241797 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:23 crc kubenswrapper[4745]: E0127 12:14:23.242223 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:23.742202119 +0000 UTC m=+156.547112817 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.278470 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p58fj" event={"ID":"58509a9e-6184-4459-9e85-f8e999f965e3","Type":"ContainerStarted","Data":"2da5c21f43383a965e11428f866c8ee6d073ca8ba12017aba8501dbed4412535"}
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.281281 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491920-s88fm" event={"ID":"6086ad74-5d02-4181-bb34-8c116409de42","Type":"ContainerStarted","Data":"60ca90d8c87883fb7d78bb2a23252d2963ddce1f3954fa70ad400f9d46849a47"}
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.285381 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5mbc7" event={"ID":"f7f10ec2-24a7-445e-8ae2-49da5ad6cf71","Type":"ContainerStarted","Data":"15071933bc991cfedf96a554c49483f26775999ae1865720263b8373d092971f"}
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.287052 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg" event={"ID":"1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1","Type":"ContainerStarted","Data":"d0b913a0cbf8c0c3b713fd52b0a4c1a1231240725e7e14e2ba572ac4250aab3f"}
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.289201 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-scl78" event={"ID":"2f032c11-b3c4-45f7-be15-d5873624adcd","Type":"ContainerStarted","Data":"a5a1465cfc138b9ef03ce0e9a6e4eb2c6c3266e531e153464605da5823a95d34"}
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.292549 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4blhs" event={"ID":"58f20271-d6bc-42dc-8932-fe80286fecd1","Type":"ContainerStarted","Data":"247caabd4cfff157e466a385d4b6cfb3a11736ecacf2e8d03f5118f45252e1c5"}
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.294010 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44s8w" event={"ID":"c8d58e84-2299-4ceb-bb86-e5e7a451b3bc","Type":"ContainerStarted","Data":"fa4f39fbec4fee9b5c37341d0724702c1a2f870547e67295631e763dc70699c9"}
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.295291 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6wlnl" event={"ID":"4c71dc30-ef02-41cf-a2f8-973dfc972054","Type":"ContainerStarted","Data":"5ea29f1c33a1cd930ae10d0ea4392c5bd491a23dca91b95a3893bbd68ff5516f"}
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.296341 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86448" event={"ID":"792e272a-64cd-47cd-8aac-eeb295e49f05","Type":"ContainerStarted","Data":"7112ec3256a658b1cea103fd2847a9323f831fd973b8de925c35188494a0b679"}
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.297687 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jfk2x" event={"ID":"5cd78bf5-69f3-4074-9dea-c7a459de6d4d","Type":"ContainerStarted","Data":"a887876be0a0983d29839bc5e0ebb9444857efcbdcc08aa3c88f84695524ef4d"}
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.299383 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zqrwf" event={"ID":"94eb6425-bdf2-43d1-926e-c94700a985be","Type":"ContainerStarted","Data":"056962067fbe8f5fd14aa3dc9656a74ceff87c88e96809c3b997c987eea3fe33"}
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.300884 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-skmp4" event={"ID":"89c991f5-72eb-4c8e-a31f-9db5b46ffc5d","Type":"ContainerStarted","Data":"0bd4b0365b17e3b5393981721a0082388192ee99da56fbc12140386a26a711f9"}
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.302248 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ks2hk" event={"ID":"7d3ec1e8-113d-4dbd-9a48-6daea4c74cdf","Type":"ContainerStarted","Data":"ba3ce52f7d33501357071e87a10ab4b4218186df9c80ce41fa078849a5ff711b"}
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.303378 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rxlbk" event={"ID":"b18e2d30-da7f-4c5f-9700-20c7c05b1043","Type":"ContainerStarted","Data":"c3278cfd9054bc5cc9fe5b00475844bbe0f69319c708499a1a79e7139b5d281e"}
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.305068 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gkffs" event={"ID":"a7db0b54-29c2-4ab4-b919-c83dcbb8f094","Type":"ContainerStarted","Data":"dae6f832a65e62698502d682661869b95c8fa1d81962bae76320e63817bc1c2c"}
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.306170 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85x5w" event={"ID":"b5c47589-d94b-44fb-b31f-1f4045ea9e3c","Type":"ContainerStarted","Data":"99cf03506ff3d60eeed30e02d9da2be9442aee7ac2169cf530e38253300de1da"}
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.307631 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-sssvv" event={"ID":"ada46f99-5088-4a53-b7b6-cc0d93f72412","Type":"ContainerStarted","Data":"c6956f79ea937ccc94aa236f39cc4d6e86902612cccdf8ec13b30f80689dc317"}
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.325178 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-5mbhc" podStartSLOduration=133.325106662 podStartE2EDuration="2m13.325106662s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:23.322232689 +0000 UTC m=+156.127143377" watchObservedRunningTime="2026-01-27 12:14:23.325106662 +0000 UTC m=+156.130017370"
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.339655 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" podStartSLOduration=133.339638439 podStartE2EDuration="2m13.339638439s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:23.33792397 +0000 UTC m=+156.142834678" watchObservedRunningTime="2026-01-27 12:14:23.339638439 +0000 UTC m=+156.144549127"
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.344924 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:23 crc kubenswrapper[4745]: E0127 12:14:23.345357 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:23.845328423 +0000 UTC m=+156.650239121 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.417564 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-5mbhc"
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.419160 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.419215 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.445867 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:23 crc kubenswrapper[4745]: E0127 12:14:23.446181 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:23.946144561 +0000 UTC m=+156.751055279 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:23 crc kubenswrapper[4745]: E0127 12:14:23.548577 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:24.048541654 +0000 UTC m=+156.853452372 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.548134 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.649466 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:23 crc kubenswrapper[4745]: E0127 12:14:23.649799 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:24.149771304 +0000 UTC m=+156.954681992 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.649935 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:23 crc kubenswrapper[4745]: E0127 12:14:23.650329 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:24.150306279 +0000 UTC m=+156.955216987 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.750517 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:23 crc kubenswrapper[4745]: E0127 12:14:23.750770 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:24.250736995 +0000 UTC m=+157.055647713 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.751087 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:23 crc kubenswrapper[4745]: E0127 12:14:23.751645 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:24.25161411 +0000 UTC m=+157.056524828 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.852186 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:23 crc kubenswrapper[4745]: E0127 12:14:23.852462 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:24.352433758 +0000 UTC m=+157.157344486 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.852630 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:23 crc kubenswrapper[4745]: E0127 12:14:23.853189 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:24.353164319 +0000 UTC m=+157.158075047 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:23 crc kubenswrapper[4745]: I0127 12:14:23.957745 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:23 crc kubenswrapper[4745]: E0127 12:14:23.958431 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:24.458404954 +0000 UTC m=+157.263315682 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.059828 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:24 crc kubenswrapper[4745]: E0127 12:14:24.060275 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:24.560249081 +0000 UTC m=+157.365159809 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.161318 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:24 crc kubenswrapper[4745]: E0127 12:14:24.161802 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:24.66178762 +0000 UTC m=+157.466698308 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.262870 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:24 crc kubenswrapper[4745]: E0127 12:14:24.263219 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:24.763203215 +0000 UTC m=+157.568113903 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.313219 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tf24j" event={"ID":"cef45995-4242-499f-adeb-cc12aa630b5c","Type":"ContainerStarted","Data":"80448bd8bdd7db3e9be4a9a164b7fb4ed9be93d2c24fbbc073e4fbdb85200d3c"}
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.314517 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9bws5" event={"ID":"8cebf393-91a7-4018-8f08-358ca7f7155b","Type":"ContainerStarted","Data":"8ec439ad3ff7614dc854a75ac993ffee59e7f8a5b85f08badbac644bf9d77d5c"}
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.315730 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"39528b2ad2e743e352b84f80228c67c9a12c989460ae33796ee2392f428e0e49"}
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.317896 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" event={"ID":"78fe56b7-5ff3-4540-bfda-efeef43859f6","Type":"ContainerStarted","Data":"53a58532b409aee7ffb7830336a03c07afe05144218afca0b84ac9c0182681cb"}
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.319171 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8dg8" event={"ID":"b98945dd-f382-4fef-97b6-9037edd2bd9f","Type":"ContainerStarted","Data":"fd6df881deb65537e3f0f05d0d02bee9dad04d907dfc03a115bfe3d9bcbc2e0f"}
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.320340 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"004af8ed6b063bd5ee5601e322784c784d3f24152e19436f112168188758451d"}
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.321306 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ccb162eba761997775836edbdf021ca8c60ccdeb633db9ce8984cfe0bc19e0b4"}
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.322704 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" event={"ID":"65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2","Type":"ContainerStarted","Data":"51721204e711d30714f91c160861fcbe9e1f7a2cd59dc67afd2147f0d9d2efab"}
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.324028 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lwzbq" event={"ID":"f6423c2b-6727-4dd3-9dc2-d4c6d1dd4ebd","Type":"ContainerStarted","Data":"0a2a0b364e0949e466c07b88a5da4a0dbbd923b11ff502362760b7f190bb752e"}
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.324946 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-r79xk" event={"ID":"199bb9ad-0a44-4631-995f-c4ef6809cd54","Type":"ContainerStarted","Data":"25d0af1fc89b4ae73d7b09da96d18daf5e2e87ebf8e4444832f32c2fe33e0d2b"}
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.326006 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zcwfv" event={"ID":"dddac23a-b546-4121-85df-7475aa7c5801","Type":"ContainerStarted","Data":"1cd9775d946ada79b2481bfbd66f589c1ad1ba7b4e680435fca820d9fe990113"}
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.327131 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5tpbz" event={"ID":"59d3b2e2-c186-4551-b6b6-962b13b3a058","Type":"ContainerStarted","Data":"d3f1c4e88379985ce05c9cc9c1a8cfe70e42d597ec29ff8f0a8a74193f668390"}
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.328289 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqb7h" event={"ID":"0a76652e-d0b0-449d-9e41-b363948890bf","Type":"ContainerStarted","Data":"d5c6879c473a4115135a7c887d5b09eeb247a3ef073fe6cbb90304f71db810e4"}
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.328615 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7xjzm"
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.342966 4745 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7xjzm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body=
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.343012 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7xjzm" podUID="97d3d9df-e52f-4eb3-8034-c5ace5c23da3" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused"
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.355920 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7xjzm" podStartSLOduration=133.35590073 podStartE2EDuration="2m13.35590073s" podCreationTimestamp="2026-01-27 12:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:24.353027107 +0000 UTC m=+157.157937795" watchObservedRunningTime="2026-01-27 12:14:24.35590073 +0000 UTC m=+157.160811408"
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.363955 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:24 crc kubenswrapper[4745]: E0127 12:14:24.364148 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:24.864121156 +0000 UTC m=+157.669031844 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.364471 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:24 crc kubenswrapper[4745]: E0127 12:14:24.365063 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:24.865048753 +0000 UTC m=+157.669959441 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.376339 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jqmqz" podStartSLOduration=134.376318687 podStartE2EDuration="2m14.376318687s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:24.373914268 +0000 UTC m=+157.178824966" watchObservedRunningTime="2026-01-27 12:14:24.376318687 +0000 UTC m=+157.181229375"
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.419306 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.419650 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.465730 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:24 crc kubenswrapper[4745]: E0127 12:14:24.465961 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:24.965934853 +0000 UTC m=+157.770845541 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.466005 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:24 crc kubenswrapper[4745]: E0127 12:14:24.466368 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:24.966356435 +0000 UTC m=+157.771267123 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.567324 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:24 crc kubenswrapper[4745]: E0127 12:14:24.567775 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:25.067752659 +0000 UTC m=+157.872663357 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.670676 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:24 crc kubenswrapper[4745]: E0127 12:14:24.671046 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:25.171031638 +0000 UTC m=+157.975942346 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.771274 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:24 crc kubenswrapper[4745]: E0127 12:14:24.771479 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:25.271441544 +0000 UTC m=+158.076352262 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.771725 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:24 crc kubenswrapper[4745]: E0127 12:14:24.772098 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:25.272082452 +0000 UTC m=+158.076993140 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.872367 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:24 crc kubenswrapper[4745]: E0127 12:14:24.872541 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:25.372504929 +0000 UTC m=+158.177415637 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.872679 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:24 crc kubenswrapper[4745]: E0127 12:14:24.873060 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:25.373046335 +0000 UTC m=+158.177957033 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:24 crc kubenswrapper[4745]: I0127 12:14:24.973563 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:24 crc kubenswrapper[4745]: E0127 12:14:24.973803 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:25.473787561 +0000 UTC m=+158.278698249 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:25 crc kubenswrapper[4745]: I0127 12:14:25.075137 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:25 crc kubenswrapper[4745]: E0127 12:14:25.075561 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:25.575542386 +0000 UTC m=+158.380453084 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:25 crc kubenswrapper[4745]: I0127 12:14:25.176235 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:25 crc kubenswrapper[4745]: E0127 12:14:25.176442 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:25.676408575 +0000 UTC m=+158.481319283 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:25 crc kubenswrapper[4745]: I0127 12:14:25.176741 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:25 crc kubenswrapper[4745]: E0127 12:14:25.177259 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:25.677237409 +0000 UTC m=+158.482148137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:25 crc kubenswrapper[4745]: I0127 12:14:25.277640 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:25 crc kubenswrapper[4745]: E0127 12:14:25.277795 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:25.777775739 +0000 UTC m=+158.582686427 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:25 crc kubenswrapper[4745]: I0127 12:14:25.278192 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:25 crc kubenswrapper[4745]: E0127 12:14:25.278496 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:25.778488909 +0000 UTC m=+158.583399597 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:25 crc kubenswrapper[4745]: I0127 12:14:25.333487 4745 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7xjzm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body=
Jan 27 12:14:25 crc kubenswrapper[4745]: I0127 12:14:25.333700 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7xjzm" podUID="97d3d9df-e52f-4eb3-8034-c5ace5c23da3" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused"
Jan 27 12:14:25 crc kubenswrapper[4745]: I0127 12:14:25.346489 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bgpwd" podStartSLOduration=134.346469843 podStartE2EDuration="2m14.346469843s" podCreationTimestamp="2026-01-27 12:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:25.345908607 +0000 UTC m=+158.150819295" watchObservedRunningTime="2026-01-27 12:14:25.346469843 +0000 UTC m=+158.151380531"
Jan 27 12:14:25 crc kubenswrapper[4745]: I0127 12:14:25.379262 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:25 crc kubenswrapper[4745]: E0127 12:14:25.379505 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:25.879473362 +0000 UTC m=+158.684384050 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:25 crc kubenswrapper[4745]: I0127 12:14:25.379823 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:25 crc kubenswrapper[4745]: E0127 12:14:25.380123 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:25.88011544 +0000 UTC m=+158.685026118 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:25 crc kubenswrapper[4745]: I0127 12:14:25.448111 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 12:14:25 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld
Jan 27 12:14:25 crc kubenswrapper[4745]: [+]process-running ok
Jan 27 12:14:25 crc kubenswrapper[4745]: healthz check failed
Jan 27 12:14:25 crc kubenswrapper[4745]: I0127 12:14:25.448185 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 12:14:25 crc kubenswrapper[4745]: I0127 12:14:25.480499 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:25 crc kubenswrapper[4745]: E0127 12:14:25.480770 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:25.980729212 +0000 UTC m=+158.785639920 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:25 crc kubenswrapper[4745]: I0127 12:14:25.480860 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:25 crc kubenswrapper[4745]: E0127 12:14:25.481294 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:25.981279828 +0000 UTC m=+158.786190686 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:25 crc kubenswrapper[4745]: I0127 12:14:25.582424 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:25 crc kubenswrapper[4745]: E0127 12:14:25.582572 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:26.082546459 +0000 UTC m=+158.887457147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:25 crc kubenswrapper[4745]: I0127 12:14:25.582963 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:25 crc kubenswrapper[4745]: E0127 12:14:25.583301 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:26.08329329 +0000 UTC m=+158.888203978 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:25 crc kubenswrapper[4745]: I0127 12:14:25.690158 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:25 crc kubenswrapper[4745]: E0127 12:14:25.690347 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:26.190316167 +0000 UTC m=+158.995226855 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:25 crc kubenswrapper[4745]: I0127 12:14:25.690803 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:25 crc kubenswrapper[4745]: E0127 12:14:25.691451 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:26.191439279 +0000 UTC m=+158.996349977 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:25 crc kubenswrapper[4745]: I0127 12:14:25.792155 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:25 crc kubenswrapper[4745]: E0127 12:14:25.792402 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:26.29237001 +0000 UTC m=+159.097280708 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:25 crc kubenswrapper[4745]: I0127 12:14:25.793145 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:25 crc kubenswrapper[4745]: E0127 12:14:25.793516 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:26.293505803 +0000 UTC m=+159.098416511 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:25 crc kubenswrapper[4745]: I0127 12:14:25.894355 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:25 crc kubenswrapper[4745]: E0127 12:14:25.894874 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:26.394856386 +0000 UTC m=+159.199767084 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:25 crc kubenswrapper[4745]: I0127 12:14:25.996424 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:25 crc kubenswrapper[4745]: E0127 12:14:25.997033 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:26.497017212 +0000 UTC m=+159.301927910 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.097408 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:26 crc kubenswrapper[4745]: E0127 12:14:26.097597 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:26.597570763 +0000 UTC m=+159.402481451 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.097736 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:26 crc kubenswrapper[4745]: E0127 12:14:26.098069 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:26.598058747 +0000 UTC m=+159.402969435 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.198372 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:26 crc kubenswrapper[4745]: E0127 12:14:26.198650 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:26.698625267 +0000 UTC m=+159.503535965 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.198735 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:26 crc kubenswrapper[4745]: E0127 12:14:26.199237 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:26.699222295 +0000 UTC m=+159.504132993 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.299521 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:26 crc kubenswrapper[4745]: E0127 12:14:26.299963 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:26.79994701 +0000 UTC m=+159.604857698 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.339598 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4hbbw" event={"ID":"f880472d-b13f-4b62-946f-3d74aafe5743","Type":"ContainerStarted","Data":"d0cd3de4cd0b8f336890e5d716fdaa2310e2981e80d0a49f9a17e8b3d1367b8e"} Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.343749 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg" Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.343771 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-skmp4" Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.343781 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-jfk2x" Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.343792 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6wlnl" Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.343902 4745 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-6wlnl container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.343928 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6wlnl" podUID="4c71dc30-ef02-41cf-a2f8-973dfc972054" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.343963 4745 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-skmp4 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.343963 4745 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jfk2x container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.344021 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jfk2x" podUID="5cd78bf5-69f3-4074-9dea-c7a459de6d4d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.344059 4745 
patch_prober.go:28] interesting pod/controller-manager-879f6c89f-ndbtg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.344078 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-skmp4" podUID="89c991f5-72eb-4c8e-a31f-9db5b46ffc5d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.344085 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg" podUID="1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.401479 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.479871 4745 patch_prober.go:28] interesting pod/console-operator-58897d9998-xhn4h container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.479940 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-xhn4h" podUID="ea5a99ae-b999-419c-9da0-1333ba6378ea" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 27 12:14:26 crc kubenswrapper[4745]: E0127 12:14:26.484732 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:26.984709621 +0000 UTC m=+159.789620309 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.485867 4745 patch_prober.go:28] interesting pod/console-operator-58897d9998-xhn4h container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.485908 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xhn4h" podUID="ea5a99ae-b999-419c-9da0-1333ba6378ea" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.486517 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hv45d" podStartSLOduration=136.486487532 podStartE2EDuration="2m16.486487532s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:26.480534431 +0000 UTC m=+159.285445119" watchObservedRunningTime="2026-01-27 12:14:26.486487532 +0000 UTC m=+159.291398220" Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.490236 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-sssvv" podStartSLOduration=136.490213799 podStartE2EDuration="2m16.490213799s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:26.360938173 +0000 UTC m=+159.165848871" watchObservedRunningTime="2026-01-27 12:14:26.490213799 +0000 UTC m=+159.295124497" Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.493877 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 12:14:26 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 27 12:14:26 crc kubenswrapper[4745]: [+]process-running ok Jan 27 12:14:26 crc kubenswrapper[4745]: healthz check failed Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.493927 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.496353 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4" Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.524300 4745 patch_prober.go:28] 
interesting pod/downloads-7954f5f757-hbsbc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.524360 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.536725 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbsbc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.536835 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.538541 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p58fj" podStartSLOduration=136.538513397 podStartE2EDuration="2m16.538513397s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:26.521409626 +0000 UTC m=+159.326320314" watchObservedRunningTime="2026-01-27 12:14:26.538513397 +0000 UTC m=+159.343424085" Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.549996 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-zqrwf" podStartSLOduration=136.549977097 podStartE2EDuration="2m16.549977097s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:26.548078372 +0000 UTC m=+159.352989080" watchObservedRunningTime="2026-01-27 12:14:26.549977097 +0000 UTC m=+159.354887795" Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.573879 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:26 crc kubenswrapper[4745]: E0127 12:14:26.574493 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:27.074469151 +0000 UTC m=+159.879379839 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.577046 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:26 crc kubenswrapper[4745]: E0127 12:14:26.579190 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:27.079148376 +0000 UTC m=+159.884059154 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.599838 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29491920-s88fm" podStartSLOduration=136.599647705 podStartE2EDuration="2m16.599647705s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:26.596633478 +0000 UTC m=+159.401544166" watchObservedRunningTime="2026-01-27 12:14:26.599647705 +0000 UTC m=+159.404558393" Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.613628 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-njxtf" podStartSLOduration=12.613610106 podStartE2EDuration="12.613610106s" podCreationTimestamp="2026-01-27 12:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:26.613142793 +0000 UTC m=+159.418053511" watchObservedRunningTime="2026-01-27 12:14:26.613610106 +0000 UTC m=+159.418520794" Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.672177 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86448" podStartSLOduration=136.672156269 podStartE2EDuration="2m16.672156269s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:26.636894805 +0000 UTC m=+159.441805493" watchObservedRunningTime="2026-01-27 12:14:26.672156269 +0000 UTC m=+159.477066957" Jan 27 12:14:26 crc 
kubenswrapper[4745]: I0127 12:14:26.678212 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:26 crc kubenswrapper[4745]: E0127 12:14:26.678466 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:27.17845328 +0000 UTC m=+159.983363968 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.687183 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-skmp4" podStartSLOduration=135.68716312 podStartE2EDuration="2m15.68716312s" podCreationTimestamp="2026-01-27 12:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:26.673834027 +0000 UTC m=+159.478744715" watchObservedRunningTime="2026-01-27 12:14:26.68716312 +0000 UTC m=+159.492073808" Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.730712 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-jfk2x" podStartSLOduration=135.730696932 podStartE2EDuration="2m15.730696932s" podCreationTimestamp="2026-01-27 12:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:26.730011212 +0000 UTC m=+159.534921900" watchObservedRunningTime="2026-01-27 12:14:26.730696932 +0000 UTC m=+159.535607610" Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.732527 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg" podStartSLOduration=136.732520104 podStartE2EDuration="2m16.732520104s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:26.704569191 +0000 UTC m=+159.509479879" watchObservedRunningTime="2026-01-27 12:14:26.732520104 +0000 UTC m=+159.537430792" Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.756491 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6wlnl" podStartSLOduration=135.756474613 podStartE2EDuration="2m15.756474613s" podCreationTimestamp="2026-01-27 12:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:26.75639197 +0000 UTC m=+159.561302668" 
watchObservedRunningTime="2026-01-27 12:14:26.756474613 +0000 UTC m=+159.561385301" Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.779038 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:26 crc kubenswrapper[4745]: E0127 12:14:26.779471 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:27.279455333 +0000 UTC m=+160.084366031 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.785026 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-4blhs" podStartSLOduration=135.78490474 podStartE2EDuration="2m15.78490474s" podCreationTimestamp="2026-01-27 12:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:26.7810871 +0000 UTC m=+159.585997788" watchObservedRunningTime="2026-01-27 12:14:26.78490474 +0000 UTC m=+159.589815428" Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.796053 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rxlbk" podStartSLOduration=136.79603207 podStartE2EDuration="2m16.79603207s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:26.795443653 +0000 UTC m=+159.600354371" watchObservedRunningTime="2026-01-27 12:14:26.79603207 +0000 UTC m=+159.600942758" Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.881664 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:26 crc kubenswrapper[4745]: E0127 12:14:26.882116 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:27.382101994 +0000 UTC m=+160.187012682 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.949547 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.983163 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:26 crc kubenswrapper[4745]: E0127 12:14:26.983554 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:27.483535829 +0000 UTC m=+160.288446517 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.993000 4745 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-ndbtg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 27 12:14:26 crc kubenswrapper[4745]: I0127 12:14:26.993051 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg" podUID="1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.084876 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:27 crc kubenswrapper[4745]: E0127 12:14:27.085038 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:27.585008116 +0000 UTC m=+160.389918804 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.085335 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:27 crc kubenswrapper[4745]: E0127 12:14:27.085739 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:27.585717316 +0000 UTC m=+160.390628184 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.188355 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:27 crc kubenswrapper[4745]: E0127 12:14:27.188540 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:27.688509951 +0000 UTC m=+160.493420639 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.189766 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:27 crc kubenswrapper[4745]: E0127 12:14:27.190328 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:27.690305982 +0000 UTC m=+160.495216670 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.197970 4745 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7xjzm container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.198039 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7xjzm" podUID="97d3d9df-e52f-4eb3-8034-c5ace5c23da3" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.198125 4745 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7xjzm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.198189 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7xjzm" podUID="97d3d9df-e52f-4eb3-8034-c5ace5c23da3" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.290940 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:27 crc kubenswrapper[4745]: E0127 12:14:27.291229 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:27.791178842 +0000 UTC m=+160.596089560 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.291303 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:27 crc kubenswrapper[4745]: E0127 12:14:27.291637 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:27.791628665 +0000 UTC m=+160.596539353 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.315881 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-zqrwf" Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.315980 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-zqrwf" Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.317259 4745 patch_prober.go:28] interesting pod/console-f9d7485db-zqrwf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.317323 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zqrwf" podUID="94eb6425-bdf2-43d1-926e-c94700a985be" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.360535 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lrlf2" 
event={"ID":"76633ed6-00c1-4c35-aa9c-93c0867d676d","Type":"ContainerStarted","Data":"1eb169d84a4f673862465eff03550276452bf8c8a713e4ed0bde283ca3a79daf"} Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.371779 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ks2hk" event={"ID":"7d3ec1e8-113d-4dbd-9a48-6daea4c74cdf","Type":"ContainerStarted","Data":"16c87a3d0f721a8919325515368ab920b044f01a3d34821f064875bfe22b0257"} Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.372948 4745 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-6wlnl container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.372950 4745 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-ndbtg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.373038 4745 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-skmp4 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.373060 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg" podUID="1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.372998 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6wlnl" podUID="4c71dc30-ef02-41cf-a2f8-973dfc972054" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.373076 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-skmp4" podUID="89c991f5-72eb-4c8e-a31f-9db5b46ffc5d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.373744 4745 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jfk2x container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.373772 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jfk2x" podUID="5cd78bf5-69f3-4074-9dea-c7a459de6d4d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Jan 27 12:14:27 crc 
kubenswrapper[4745]: I0127 12:14:27.392028 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:27 crc kubenswrapper[4745]: E0127 12:14:27.392527 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:27.892505733 +0000 UTC m=+160.697416431 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.404942 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" podStartSLOduration=137.40491682 podStartE2EDuration="2m17.40491682s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:27.402367337 +0000 UTC m=+160.207278025" watchObservedRunningTime="2026-01-27 12:14:27.40491682 +0000 UTC m=+160.209827508" Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.418184 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-5mbhc" Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.422860 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 12:14:27 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 27 12:14:27 crc kubenswrapper[4745]: [+]process-running ok Jan 27 12:14:27 crc kubenswrapper[4745]: healthz check failed Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.423173 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.423324 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" podStartSLOduration=136.423304749 podStartE2EDuration="2m16.423304749s" podCreationTimestamp="2026-01-27 12:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:27.420353484 +0000 UTC m=+160.225264172" watchObservedRunningTime="2026-01-27 12:14:27.423304749 +0000 UTC m=+160.228215457" Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.436660 4745 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-ingress-canary/ingress-canary-9bws5" podStartSLOduration=13.436645302 podStartE2EDuration="13.436645302s" podCreationTimestamp="2026-01-27 12:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:27.435310764 +0000 UTC m=+160.240221452" watchObservedRunningTime="2026-01-27 12:14:27.436645302 +0000 UTC m=+160.241555990" Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.450688 4745 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jfk2x container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.450988 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jfk2x" podUID="5cd78bf5-69f3-4074-9dea-c7a459de6d4d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.450760 4745 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jfk2x container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.451442 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-jfk2x" podUID="5cd78bf5-69f3-4074-9dea-c7a459de6d4d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.496183 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nqb7h" podStartSLOduration=137.496155223 podStartE2EDuration="2m17.496155223s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:27.490684545 +0000 UTC m=+160.295595223" watchObservedRunningTime="2026-01-27 12:14:27.496155223 +0000 UTC m=+160.301065911" Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.500056 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:27 crc kubenswrapper[4745]: E0127 12:14:27.501247 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:28.001228898 +0000 UTC m=+160.806139587 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.521697 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5tpbz" podStartSLOduration=137.521675276 podStartE2EDuration="2m17.521675276s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:27.520215594 +0000 UTC m=+160.325126282" watchObservedRunningTime="2026-01-27 12:14:27.521675276 +0000 UTC m=+160.326585974" Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.606724 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:27 crc kubenswrapper[4745]: E0127 12:14:27.606868 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:28.106850364 +0000 UTC m=+160.911761052 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.607015 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:27 crc kubenswrapper[4745]: E0127 12:14:27.607402 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:28.10739398 +0000 UTC m=+160.912304668 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.610097 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.613990 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-zcwfv" podStartSLOduration=137.613955089 podStartE2EDuration="2m17.613955089s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:27.61294619 +0000 UTC m=+160.417856878" watchObservedRunningTime="2026-01-27 12:14:27.613955089 +0000 UTC m=+160.418865777" Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.631464 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l8dg8" podStartSLOduration=136.631445691 podStartE2EDuration="2m16.631445691s" podCreationTimestamp="2026-01-27 12:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:27.627628722 +0000 UTC m=+160.432539420" watchObservedRunningTime="2026-01-27 12:14:27.631445691 +0000 UTC m=+160.436356379" Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.682264 4745 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-skmp4 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.682262 4745 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-skmp4 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.682327 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-skmp4" podUID="89c991f5-72eb-4c8e-a31f-9db5b46ffc5d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.682380 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-skmp4" podUID="89c991f5-72eb-4c8e-a31f-9db5b46ffc5d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.696305 4745 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-6wlnl container/catalog-operator 
namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.696344 4745 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-6wlnl container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.696399 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6wlnl" podUID="4c71dc30-ef02-41cf-a2f8-973dfc972054" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.696394 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6wlnl" podUID="4c71dc30-ef02-41cf-a2f8-973dfc972054" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.708562 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:27 crc kubenswrapper[4745]: E0127 12:14:27.708792 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:28.208758854 +0000 UTC m=+161.013669552 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.709091 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:27 crc kubenswrapper[4745]: E0127 12:14:27.709715 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:28.209694381 +0000 UTC m=+161.014605059 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.811151 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:27 crc kubenswrapper[4745]: E0127 12:14:27.811361 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:28.311328952 +0000 UTC m=+161.116239640 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.811485 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:27 crc kubenswrapper[4745]: E0127 12:14:27.811863 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:28.311853117 +0000 UTC m=+161.116764035 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.912680 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:27 crc kubenswrapper[4745]: E0127 12:14:27.912880 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:28.41285748 +0000 UTC m=+161.217768178 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:27 crc kubenswrapper[4745]: I0127 12:14:27.913610 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:27 crc kubenswrapper[4745]: E0127 12:14:27.914019 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:28.414007823 +0000 UTC m=+161.218918511 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.014940 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:28 crc kubenswrapper[4745]: E0127 12:14:28.015226 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:28.515211502 +0000 UTC m=+161.320122190 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.116227 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:28 crc kubenswrapper[4745]: E0127 12:14:28.116541 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:28.616528415 +0000 UTC m=+161.421439103 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.217512 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:28 crc kubenswrapper[4745]: E0127 12:14:28.217786 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:28.717744264 +0000 UTC m=+161.522654952 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.218111 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:28 crc kubenswrapper[4745]: E0127 12:14:28.218416 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:28.718407813 +0000 UTC m=+161.523318501 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.319487 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:28 crc kubenswrapper[4745]: E0127 12:14:28.319674 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:28.819647513 +0000 UTC m=+161.624558201 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.320049 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:28 crc kubenswrapper[4745]: E0127 12:14:28.320428 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:28.820414245 +0000 UTC m=+161.625324933 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.377717 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-scl78" event={"ID":"2f032c11-b3c4-45f7-be15-d5873624adcd","Type":"ContainerStarted","Data":"03c561194943d9fa4f575265516ffeb85c28d84ec29f3df2e96e15373b2d04dc"} Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.379722 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5mbc7" event={"ID":"f7f10ec2-24a7-445e-8ae2-49da5ad6cf71","Type":"ContainerStarted","Data":"db87818e2a576cb3f9113dd56ef1327ea77620d080c2a47709826ee372c10287"} Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.381423 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44s8w" event={"ID":"c8d58e84-2299-4ceb-bb86-e5e7a451b3bc","Type":"ContainerStarted","Data":"3aff20d33236d3ff3b7800af57e819b091dd6a8731e715931a7560c6041325ec"} Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.383229 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s4vnk" event={"ID":"a4a46bed-8781-4e46-a70e-868c24144a1f","Type":"ContainerStarted","Data":"6a3f32e1ef17a44f588bf4ec4584d73a059808a86ed504e256d6739256265dc9"} Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.384980 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lwzbq" event={"ID":"f6423c2b-6727-4dd3-9dc2-d4c6d1dd4ebd","Type":"ContainerStarted","Data":"8a12ed578bd864054332303cd2e08b9f089d3397ce9a812d1b9e8e71ad50c9fe"} Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.386550 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gkffs" event={"ID":"a7db0b54-29c2-4ab4-b919-c83dcbb8f094","Type":"ContainerStarted","Data":"e19705150518c10aa7e48e8c64b44ac9a5a813fa76e86243db883499d5aa2999"} Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.388217 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-r79xk" event={"ID":"199bb9ad-0a44-4631-995f-c4ef6809cd54","Type":"ContainerStarted","Data":"4144b5834e4363a16cf2f357679d29215b29a815bc953cf4573c9d005594a540"} Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.390307 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tf24j" event={"ID":"cef45995-4242-499f-adeb-cc12aa630b5c","Type":"ContainerStarted","Data":"789cebf5f40e3c02d18001d6e792d9132206a13259f8cabe979f389122abaf25"} Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.391956 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85x5w" event={"ID":"b5c47589-d94b-44fb-b31f-1f4045ea9e3c","Type":"ContainerStarted","Data":"e5c1eb04ee88d6a872883cad7c55bee53e9e7e9306ccfacd09f74ae46a1ea43f"} Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 
12:14:28.393233 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v7vzv" event={"ID":"0b5cf703-06c8-4a98-b58b-71543d23affe","Type":"ContainerStarted","Data":"2ded684abb665c7402888832fcc2655a11983709657499cf636360fd04f82c34"} Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.398900 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s4vnk" podStartSLOduration=138.39888155 podStartE2EDuration="2m18.39888155s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:28.398749576 +0000 UTC m=+161.203660254" watchObservedRunningTime="2026-01-27 12:14:28.39888155 +0000 UTC m=+161.203792248" Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.420775 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:28 crc kubenswrapper[4745]: E0127 12:14:28.421236 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:28.921210712 +0000 UTC m=+161.726121470 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.421249 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 12:14:28 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 27 12:14:28 crc kubenswrapper[4745]: [+]process-running ok Jan 27 12:14:28 crc kubenswrapper[4745]: healthz check failed Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.421297 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.434634 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ks2hk" podStartSLOduration=137.434617098 podStartE2EDuration="2m17.434617098s" podCreationTimestamp="2026-01-27 12:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:28.412505252 +0000 UTC m=+161.217415940" watchObservedRunningTime="2026-01-27 12:14:28.434617098 +0000 UTC m=+161.239527786" Jan 27 
12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.438381 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lrlf2" podStartSLOduration=138.438363155 podStartE2EDuration="2m18.438363155s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:28.431603201 +0000 UTC m=+161.236513889" watchObservedRunningTime="2026-01-27 12:14:28.438363155 +0000 UTC m=+161.243273853" Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.452899 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-4hbbw" podStartSLOduration=137.452880223 podStartE2EDuration="2m17.452880223s" podCreationTimestamp="2026-01-27 12:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:28.448907428 +0000 UTC m=+161.253818116" watchObservedRunningTime="2026-01-27 12:14:28.452880223 +0000 UTC m=+161.257790911" Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.511492 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.513879 4745 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-l2frr container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.513917 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" podUID="65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.513934 4745 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-l2frr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.513984 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" podUID="65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.514334 4745 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-l2frr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.514359 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" 
podUID="65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.523753 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:28 crc kubenswrapper[4745]: E0127 12:14:28.524097 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:29.024082579 +0000 UTC m=+161.828993267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.625217 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:28 crc kubenswrapper[4745]: E0127 12:14:28.625380 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:29.12536104 +0000 UTC m=+161.930271738 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.625648 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:28 crc kubenswrapper[4745]: E0127 12:14:28.625981 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:29.125971358 +0000 UTC m=+161.930882046 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.726846 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:28 crc kubenswrapper[4745]: E0127 12:14:28.727223 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:29.227204058 +0000 UTC m=+162.032114756 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.827836 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:28 crc kubenswrapper[4745]: E0127 12:14:28.828164 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:29.328145609 +0000 UTC m=+162.133056297 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.930209 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:28 crc kubenswrapper[4745]: E0127 12:14:28.930380 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:29.430350297 +0000 UTC m=+162.235260985 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:28 crc kubenswrapper[4745]: I0127 12:14:28.930447 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:28 crc kubenswrapper[4745]: E0127 12:14:28.930726 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:29.430712797 +0000 UTC m=+162.235623485 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.031956 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:29 crc kubenswrapper[4745]: E0127 12:14:29.032097 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:29.532079981 +0000 UTC m=+162.336990659 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.032182 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:29 crc kubenswrapper[4745]: E0127 12:14:29.032472 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:29.532463802 +0000 UTC m=+162.337374490 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.136498 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:29 crc kubenswrapper[4745]: E0127 12:14:29.136951 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:29.636917504 +0000 UTC m=+162.441828222 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.237539 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:29 crc kubenswrapper[4745]: E0127 12:14:29.237895 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:29.737881876 +0000 UTC m=+162.542792574 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.332988 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.338996 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:29 crc kubenswrapper[4745]: E0127 12:14:29.339214 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:29.839183488 +0000 UTC m=+162.644094186 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.339412 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:29 crc kubenswrapper[4745]: E0127 12:14:29.339796 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:29.839778995 +0000 UTC m=+162.644689693 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.415947 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-r79xk" podStartSLOduration=138.415930534 podStartE2EDuration="2m18.415930534s" podCreationTimestamp="2026-01-27 12:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:29.414171823 +0000 UTC m=+162.219082531" watchObservedRunningTime="2026-01-27 12:14:29.415930534 +0000 UTC m=+162.220841222" Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.422012 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 12:14:29 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 27 12:14:29 crc kubenswrapper[4745]: [+]process-running ok Jan 27 12:14:29 crc kubenswrapper[4745]: healthz check failed Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.422350 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.431670 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-85x5w" podStartSLOduration=139.431653046 podStartE2EDuration="2m19.431653046s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:29.430373679 +0000 UTC m=+162.235284387" watchObservedRunningTime="2026-01-27 12:14:29.431653046 +0000 UTC m=+162.236563734" Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.440949 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:29 crc kubenswrapper[4745]: E0127 12:14:29.441148 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:29.941121378 +0000 UTC m=+162.746032066 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.441193 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:29 crc kubenswrapper[4745]: E0127 12:14:29.441653 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:29.941642203 +0000 UTC m=+162.746552891 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.464873 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.465563 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.470575 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.470945 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.473899 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.542005 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:29 crc kubenswrapper[4745]: E0127 12:14:29.542204 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:30.042183673 +0000 UTC m=+162.847094371 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.542389 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:29 crc kubenswrapper[4745]: E0127 12:14:29.542946 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:30.042930254 +0000 UTC m=+162.847840942 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.643061 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.643301 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/accaeefe-6bc4-4802-a2c1-f356d6c57222-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"accaeefe-6bc4-4802-a2c1-f356d6c57222\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.643392 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/accaeefe-6bc4-4802-a2c1-f356d6c57222-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"accaeefe-6bc4-4802-a2c1-f356d6c57222\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 27 12:14:29 crc kubenswrapper[4745]: E0127 12:14:29.643519 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:30.143501075 +0000 UTC m=+162.948411763 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.744860 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/accaeefe-6bc4-4802-a2c1-f356d6c57222-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"accaeefe-6bc4-4802-a2c1-f356d6c57222\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.744914 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/accaeefe-6bc4-4802-a2c1-f356d6c57222-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"accaeefe-6bc4-4802-a2c1-f356d6c57222\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.744979 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.744980 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/accaeefe-6bc4-4802-a2c1-f356d6c57222-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"accaeefe-6bc4-4802-a2c1-f356d6c57222\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 27 12:14:29 crc kubenswrapper[4745]: E0127 12:14:29.745285 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:30.24526884 +0000 UTC m=+163.050179528 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.778442 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/accaeefe-6bc4-4802-a2c1-f356d6c57222-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"accaeefe-6bc4-4802-a2c1-f356d6c57222\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.787724 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.846049 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:29 crc kubenswrapper[4745]: E0127 12:14:29.846254 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:30.346226762 +0000 UTC m=+163.151137440 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.846419 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:29 crc kubenswrapper[4745]: E0127 12:14:29.846759 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:30.346741467 +0000 UTC m=+163.151652155 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.947965 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:29 crc kubenswrapper[4745]: E0127 12:14:29.948381 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:30.448362368 +0000 UTC m=+163.253273056 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:29 crc kubenswrapper[4745]: I0127 12:14:29.979696 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 27 12:14:30 crc kubenswrapper[4745]: I0127 12:14:30.049109 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:30 crc kubenswrapper[4745]: E0127 12:14:30.049543 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:30.549526016 +0000 UTC m=+163.354436694 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:30 crc kubenswrapper[4745]: I0127 12:14:30.149975 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:30 crc kubenswrapper[4745]: E0127 12:14:30.150326 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:30.650307933 +0000 UTC m=+163.455218621 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:30 crc kubenswrapper[4745]: I0127 12:14:30.251600 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:30 crc kubenswrapper[4745]: E0127 12:14:30.251939 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:30.751908473 +0000 UTC m=+163.556819151 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:30 crc kubenswrapper[4745]: I0127 12:14:30.353227 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:30 crc kubenswrapper[4745]: E0127 12:14:30.353402 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:30.85337206 +0000 UTC m=+163.658282758 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:30 crc kubenswrapper[4745]: I0127 12:14:30.353549 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:30 crc kubenswrapper[4745]: E0127 12:14:30.353902 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:30.853895345 +0000 UTC m=+163.658806023 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:30 crc kubenswrapper[4745]: I0127 12:14:30.407202 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"accaeefe-6bc4-4802-a2c1-f356d6c57222","Type":"ContainerStarted","Data":"9cc0415db17b0d9352138f2d5a83999ba839075fccdc3867b6b5df092be30047"}
Jan 27 12:14:30 crc kubenswrapper[4745]: I0127 12:14:30.408090 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5mbc7"
Jan 27 12:14:30 crc kubenswrapper[4745]: I0127 12:14:30.408151 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-lwzbq"
Jan 27 12:14:30 crc kubenswrapper[4745]: I0127 12:14:30.422188 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 12:14:30 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld
Jan 27 12:14:30 crc kubenswrapper[4745]: [+]process-running ok
Jan 27 12:14:30 crc kubenswrapper[4745]: healthz check failed
Jan 27 12:14:30 crc kubenswrapper[4745]: I0127 12:14:30.422300 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 12:14:30 crc kubenswrapper[4745]: I0127 12:14:30.443322 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5mbc7" podStartSLOduration=139.443301565 podStartE2EDuration="2m19.443301565s" podCreationTimestamp="2026-01-27 12:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:30.436679464 +0000 UTC m=+163.241590162" watchObservedRunningTime="2026-01-27 12:14:30.443301565 +0000 UTC m=+163.248212253"
Jan 27 12:14:30 crc kubenswrapper[4745]: I0127 12:14:30.454662 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:30 crc kubenswrapper[4745]: E0127 12:14:30.456300 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:30.956272358 +0000 UTC m=+163.761183066 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:30 crc kubenswrapper[4745]: I0127 12:14:30.459289 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-lwzbq" podStartSLOduration=16.459269094 podStartE2EDuration="16.459269094s" podCreationTimestamp="2026-01-27 12:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:30.45669784 +0000 UTC m=+163.261608538" watchObservedRunningTime="2026-01-27 12:14:30.459269094 +0000 UTC m=+163.264179802"
Jan 27 12:14:30 crc kubenswrapper[4745]: I0127 12:14:30.471322 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44s8w" podStartSLOduration=140.47130526 podStartE2EDuration="2m20.47130526s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:30.470952629 +0000 UTC m=+163.275863337" watchObservedRunningTime="2026-01-27 12:14:30.47130526 +0000 UTC m=+163.276215948"
Jan 27 12:14:30 crc kubenswrapper[4745]: I0127 12:14:30.557956 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:30 crc kubenswrapper[4745]: E0127 12:14:30.560219 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:31.060195955 +0000 UTC m=+163.865106633 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:30 crc kubenswrapper[4745]: I0127 12:14:30.659290 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:30 crc kubenswrapper[4745]: E0127 12:14:30.659468 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:31.159440657 +0000 UTC m=+163.964351345 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:30 crc kubenswrapper[4745]: I0127 12:14:30.660012 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:30 crc kubenswrapper[4745]: E0127 12:14:30.660447 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:31.160431176 +0000 UTC m=+163.965341864 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:30 crc kubenswrapper[4745]: I0127 12:14:30.760880 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:30 crc kubenswrapper[4745]: E0127 12:14:30.761146 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:31.26111807 +0000 UTC m=+164.066028758 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:30 crc kubenswrapper[4745]: I0127 12:14:30.761263 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:30 crc kubenswrapper[4745]: E0127 12:14:30.761635 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:31.261626415 +0000 UTC m=+164.066537103 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:30 crc kubenswrapper[4745]: I0127 12:14:30.862588 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:30 crc kubenswrapper[4745]: E0127 12:14:30.862977 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:31.362922446 +0000 UTC m=+164.167833134 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:30 crc kubenswrapper[4745]: I0127 12:14:30.863061 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:30 crc kubenswrapper[4745]: E0127 12:14:30.863641 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:31.363627346 +0000 UTC m=+164.168538034 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:30 crc kubenswrapper[4745]: I0127 12:14:30.964355 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:30 crc kubenswrapper[4745]: E0127 12:14:30.964648 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:31.464597038 +0000 UTC m=+164.269507726 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:30 crc kubenswrapper[4745]: I0127 12:14:30.964904 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:30 crc kubenswrapper[4745]: E0127 12:14:30.965448 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:31.465437392 +0000 UTC m=+164.270348080 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.066754 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:31 crc kubenswrapper[4745]: E0127 12:14:31.067056 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:31.567014082 +0000 UTC m=+164.371924770 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.067428 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:31 crc kubenswrapper[4745]: E0127 12:14:31.068029 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:31.56800105 +0000 UTC m=+164.372911778 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.165333 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gkffs" podStartSLOduration=141.165314427 podStartE2EDuration="2m21.165314427s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:30.497976626 +0000 UTC m=+163.302887314" watchObservedRunningTime="2026-01-27 12:14:31.165314427 +0000 UTC m=+163.970225115"
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.168381 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.168493 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 27 12:14:31 crc kubenswrapper[4745]: E0127 12:14:31.168604 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:31.6685627 +0000 UTC m=+164.473473428 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.168707 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.169247 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 27 12:14:31 crc kubenswrapper[4745]: E0127 12:14:31.169254 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:31.669220469 +0000 UTC m=+164.474131237 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.175481 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.175498 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.217764 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.269832 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:31 crc kubenswrapper[4745]: E0127 12:14:31.270025 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:31.769998716 +0000 UTC m=+164.574909394 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.270659 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.270734 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5d9f268-24c4-4bd8-9de3-84d3eeab64c2-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d5d9f268-24c4-4bd8-9de3-84d3eeab64c2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 27 12:14:31 crc kubenswrapper[4745]: E0127 12:14:31.271023 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:31.771015045 +0000 UTC m=+164.575925733 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.271062 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5d9f268-24c4-4bd8-9de3-84d3eeab64c2-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d5d9f268-24c4-4bd8-9de3-84d3eeab64c2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.291859 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f"
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.291932 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f"
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.292770 4745 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-qt59f container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.21:8443/livez\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.292853 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f" podUID="78fe56b7-5ff3-4540-bfda-efeef43859f6" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.21:8443/livez\": dial tcp 10.217.0.21:8443: connect: connection refused"
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.372602 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:31 crc kubenswrapper[4745]: E0127 12:14:31.372754 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:31.872733579 +0000 UTC m=+164.677644267 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.372876 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.372916 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5d9f268-24c4-4bd8-9de3-84d3eeab64c2-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d5d9f268-24c4-4bd8-9de3-84d3eeab64c2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.372944 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5d9f268-24c4-4bd8-9de3-84d3eeab64c2-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d5d9f268-24c4-4bd8-9de3-84d3eeab64c2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.373024 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5d9f268-24c4-4bd8-9de3-84d3eeab64c2-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d5d9f268-24c4-4bd8-9de3-84d3eeab64c2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 27 12:14:31 crc kubenswrapper[4745]: E0127 12:14:31.373189 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:31.873180552 +0000 UTC m=+164.678091240 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.403117 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5d9f268-24c4-4bd8-9de3-84d3eeab64c2-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d5d9f268-24c4-4bd8-9de3-84d3eeab64c2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.420321 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 12:14:31 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld
Jan 27 12:14:31 crc kubenswrapper[4745]: [+]process-running ok
Jan 27 12:14:31 crc kubenswrapper[4745]: healthz check failed
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.420391 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.474641 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:31 crc kubenswrapper[4745]: E0127 12:14:31.474922 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:31.974867215 +0000 UTC m=+164.779777903 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.475387 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:31 crc kubenswrapper[4745]: E0127 12:14:31.476697 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:31.976675877 +0000 UTC m=+164.781586565 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.493477 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.511951 4745 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-l2frr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.512041 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" podUID="65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.512585 4745 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-l2frr container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.512611 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" podUID="65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.577310 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:31 crc kubenswrapper[4745]: E0127 12:14:31.577566 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:32.077525355 +0000 UTC m=+164.882436063 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.578153 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:31 crc kubenswrapper[4745]: E0127 12:14:31.579194 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:32.079176643 +0000 UTC m=+164.884087351 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.679474 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:31 crc kubenswrapper[4745]: E0127 12:14:31.679865 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:32.179843316 +0000 UTC m=+164.984754004 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.781164 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:31 crc kubenswrapper[4745]: E0127 12:14:31.781532 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:32.281515659 +0000 UTC m=+165.086426347 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.882936 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:31 crc kubenswrapper[4745]: E0127 12:14:31.883275 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:32.383257843 +0000 UTC m=+165.188168531 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:31 crc kubenswrapper[4745]: I0127 12:14:31.984200 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:31 crc kubenswrapper[4745]: E0127 12:14:31.984548 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:32.484533715 +0000 UTC m=+165.289444403 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:32 crc kubenswrapper[4745]: I0127 12:14:32.018774 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 27 12:14:32 crc kubenswrapper[4745]: W0127 12:14:32.072001 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd5d9f268_24c4_4bd8_9de3_84d3eeab64c2.slice/crio-353895853b9cbd116411328eb7e0fc0e3a251ec51868a222d18b8fa69faf9b6d WatchSource:0}: Error finding container 353895853b9cbd116411328eb7e0fc0e3a251ec51868a222d18b8fa69faf9b6d: Status 404 returned error can't find the container with id 353895853b9cbd116411328eb7e0fc0e3a251ec51868a222d18b8fa69faf9b6d
Jan 27 12:14:32 crc kubenswrapper[4745]: I0127 12:14:32.085149 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:32 crc kubenswrapper[4745]: E0127 12:14:32.085272 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:32.585249869 +0000 UTC m=+165.390160567 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:32 crc kubenswrapper[4745]: I0127 12:14:32.086149 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:32 crc kubenswrapper[4745]: E0127 12:14:32.086470 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:32.586460074 +0000 UTC m=+165.391370752 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:32 crc kubenswrapper[4745]: I0127 12:14:32.188332 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:32 crc kubenswrapper[4745]: E0127 12:14:32.188388 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:32.688364323 +0000 UTC m=+165.493275011 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:32 crc kubenswrapper[4745]: I0127 12:14:32.188772 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:32 crc kubenswrapper[4745]: E0127 12:14:32.189076 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:32.689068444 +0000 UTC m=+165.493979132 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:32 crc kubenswrapper[4745]: I0127 12:14:32.289985 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:32 crc kubenswrapper[4745]: E0127 12:14:32.290206 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:32.79017965 +0000 UTC m=+165.595090378 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:32 crc kubenswrapper[4745]: I0127 12:14:32.290804 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:32 crc kubenswrapper[4745]: E0127 12:14:32.291329 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:32.791310862 +0000 UTC m=+165.596221590 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:32 crc kubenswrapper[4745]: I0127 12:14:32.392875 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:32 crc kubenswrapper[4745]: E0127 12:14:32.393271 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:32.893217241 +0000 UTC m=+165.698127929 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:32 crc kubenswrapper[4745]: I0127 12:14:32.393503 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:32 crc kubenswrapper[4745]: E0127 12:14:32.393876 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:32.89386696 +0000 UTC m=+165.698777638 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:32 crc kubenswrapper[4745]: I0127 12:14:32.422160 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 12:14:32 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld
Jan 27 12:14:32 crc kubenswrapper[4745]: [+]process-running ok
Jan 27 12:14:32 crc kubenswrapper[4745]: healthz check failed
Jan 27 12:14:32 crc kubenswrapper[4745]: I0127 12:14:32.422249 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 12:14:32 crc kubenswrapper[4745]: I0127 12:14:32.423612 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d5d9f268-24c4-4bd8-9de3-84d3eeab64c2","Type":"ContainerStarted","Data":"353895853b9cbd116411328eb7e0fc0e3a251ec51868a222d18b8fa69faf9b6d"}
Jan 27 12:14:32 crc kubenswrapper[4745]: I0127 12:14:32.425988 4745 generic.go:334] "Generic (PLEG): container finished" podID="6086ad74-5d02-4181-bb34-8c116409de42" containerID="60ca90d8c87883fb7d78bb2a23252d2963ddce1f3954fa70ad400f9d46849a47" exitCode=0
Jan 27 12:14:32 crc kubenswrapper[4745]: I0127 12:14:32.426774 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491920-s88fm" event={"ID":"6086ad74-5d02-4181-bb34-8c116409de42","Type":"ContainerDied","Data":"60ca90d8c87883fb7d78bb2a23252d2963ddce1f3954fa70ad400f9d46849a47"}
Jan 27 12:14:32 crc kubenswrapper[4745]: I0127 12:14:32.456840 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-tf24j" podStartSLOduration=142.456802349 podStartE2EDuration="2m22.456802349s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:32.453421562 +0000 UTC m=+165.258332250" watchObservedRunningTime="2026-01-27 12:14:32.456802349 +0000 UTC m=+165.261713057"
Jan 27 12:14:32 crc kubenswrapper[4745]: I0127 12:14:32.494608 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:32 crc kubenswrapper[4745]: E0127 12:14:32.494790 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:32.9947561 +0000 UTC m=+165.799666788 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:32 crc kubenswrapper[4745]: I0127 12:14:32.494889 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:32 crc kubenswrapper[4745]: E0127 12:14:32.495261 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:32.995248224 +0000 UTC m=+165.800158912 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:32 crc kubenswrapper[4745]: I0127 12:14:32.496721 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-scl78" podStartSLOduration=142.496709926 podStartE2EDuration="2m22.496709926s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:32.496382577 +0000 UTC m=+165.301293295" watchObservedRunningTime="2026-01-27 12:14:32.496709926 +0000 UTC m=+165.301620614"
Jan 27 12:14:32 crc kubenswrapper[4745]: I0127 12:14:32.595513 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:32 crc kubenswrapper[4745]: E0127 12:14:32.595720 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:33.095689792 +0000 UTC m=+165.900600500 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:32 crc kubenswrapper[4745]: I0127 12:14:32.595901 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:32 crc kubenswrapper[4745]: E0127 12:14:32.596240 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:33.096229057 +0000 UTC m=+165.901139765 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:32 crc kubenswrapper[4745]: I0127 12:14:32.698232 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:32 crc kubenswrapper[4745]: E0127 12:14:32.698582 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:33.198562568 +0000 UTC m=+166.003473256 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:32 crc kubenswrapper[4745]: I0127 12:14:32.799943 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:32 crc kubenswrapper[4745]: E0127 12:14:32.800314 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:33.300300933 +0000 UTC m=+166.105211621 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:32 crc kubenswrapper[4745]: I0127 12:14:32.901799 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:32 crc kubenswrapper[4745]: E0127 12:14:32.901933 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:33.401909033 +0000 UTC m=+166.206819721 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:32 crc kubenswrapper[4745]: I0127 12:14:32.902182 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:32 crc kubenswrapper[4745]: E0127 12:14:32.902458 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:33.402443909 +0000 UTC m=+166.207354597 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:33 crc kubenswrapper[4745]: I0127 12:14:33.003599 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:33 crc kubenswrapper[4745]: E0127 12:14:33.003777 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:33.50375036 +0000 UTC m=+166.308661048 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:33 crc kubenswrapper[4745]: I0127 12:14:33.004138 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:33 crc kubenswrapper[4745]: E0127 12:14:33.004638 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:33.504614795 +0000 UTC m=+166.309525483 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:33 crc kubenswrapper[4745]: I0127 12:14:33.104932 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:33 crc kubenswrapper[4745]: E0127 12:14:33.105124 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:33.605100104 +0000 UTC m=+166.410010792 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:33 crc kubenswrapper[4745]: I0127 12:14:33.105270 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:33 crc kubenswrapper[4745]: E0127 12:14:33.105615 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:33.605600748 +0000 UTC m=+166.410511506 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:33 crc kubenswrapper[4745]: I0127 12:14:33.206199 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:33 crc kubenswrapper[4745]: E0127 12:14:33.206418 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:33.706381555 +0000 UTC m=+166.511292243 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:33 crc kubenswrapper[4745]: I0127 12:14:33.206680 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:33 crc kubenswrapper[4745]: E0127 12:14:33.207064 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:33.707053744 +0000 UTC m=+166.511964622 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:33 crc kubenswrapper[4745]: I0127 12:14:33.307898 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:33 crc kubenswrapper[4745]: E0127 12:14:33.308121 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:33.808093358 +0000 UTC m=+166.613004046 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:33 crc kubenswrapper[4745]: I0127 12:14:33.308177 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:33 crc kubenswrapper[4745]: E0127 12:14:33.308493 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:33.80848583 +0000 UTC m=+166.613396608 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:33 crc kubenswrapper[4745]: I0127 12:14:33.409068 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:33 crc kubenswrapper[4745]: E0127 12:14:33.409399 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:33.90938004 +0000 UTC m=+166.714290728 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:33 crc kubenswrapper[4745]: I0127 12:14:33.421328 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 12:14:33 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld
Jan 27 12:14:33 crc kubenswrapper[4745]: [+]process-running ok
Jan 27 12:14:33 crc kubenswrapper[4745]: healthz check failed
Jan 27 12:14:33 crc kubenswrapper[4745]: I0127 12:14:33.421866 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 12:14:33 crc kubenswrapper[4745]: I0127 12:14:33.510117 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:33 crc kubenswrapper[4745]: I0127 12:14:33.510215 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1811fa8-9015-4fe0-8fad-2461d64cdffd-metrics-certs\") pod \"network-metrics-daemon-swntl\" (UID: \"c1811fa8-9015-4fe0-8fad-2461d64cdffd\") " pod="openshift-multus/network-metrics-daemon-swntl"
Jan 27 12:14:33 crc kubenswrapper[4745]: E0127 12:14:33.510496 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:34.010481696 +0000 UTC m=+166.815392384 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:33 crc kubenswrapper[4745]: I0127 12:14:33.607362 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1811fa8-9015-4fe0-8fad-2461d64cdffd-metrics-certs\") pod \"network-metrics-daemon-swntl\" (UID: \"c1811fa8-9015-4fe0-8fad-2461d64cdffd\") " pod="openshift-multus/network-metrics-daemon-swntl"
Jan 27 12:14:33 crc kubenswrapper[4745]: I0127 12:14:33.612363 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:33 crc kubenswrapper[4745]: E0127 12:14:33.614187 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:34.114130335 +0000 UTC m=+166.919041033 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:33 crc kubenswrapper[4745]: I0127 12:14:33.699036 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-swntl"
Jan 27 12:14:33 crc kubenswrapper[4745]: I0127 12:14:33.714116 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:33 crc kubenswrapper[4745]: E0127 12:14:33.714430 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:34.214414878 +0000 UTC m=+167.019325566 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:33 crc kubenswrapper[4745]: I0127 12:14:33.815789 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:33 crc kubenswrapper[4745]: E0127 12:14:33.816237 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:34.316214504 +0000 UTC m=+167.121125192 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:33 crc kubenswrapper[4745]: I0127 12:14:33.828409 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491920-s88fm"
Jan 27 12:14:33 crc kubenswrapper[4745]: I0127 12:14:33.917258 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:33 crc kubenswrapper[4745]: E0127 12:14:33.917720 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:34.417564897 +0000 UTC m=+167.222475585 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:33 crc kubenswrapper[4745]: I0127 12:14:33.963779 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-swntl"]
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.017850 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:34 crc kubenswrapper[4745]: E0127 12:14:34.017977 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:34.517955532 +0000 UTC m=+167.322866220 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.018216 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6086ad74-5d02-4181-bb34-8c116409de42-config-volume\") pod \"6086ad74-5d02-4181-bb34-8c116409de42\" (UID: \"6086ad74-5d02-4181-bb34-8c116409de42\") "
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.018247 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6086ad74-5d02-4181-bb34-8c116409de42-secret-volume\") pod \"6086ad74-5d02-4181-bb34-8c116409de42\" (UID: \"6086ad74-5d02-4181-bb34-8c116409de42\") "
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.018374 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kgsk\" (UniqueName: \"kubernetes.io/projected/6086ad74-5d02-4181-bb34-8c116409de42-kube-api-access-6kgsk\") pod \"6086ad74-5d02-4181-bb34-8c116409de42\" (UID: \"6086ad74-5d02-4181-bb34-8c116409de42\") "
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.018623 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.018964 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6086ad74-5d02-4181-bb34-8c116409de42-config-volume" (OuterVolumeSpecName: "config-volume") pod "6086ad74-5d02-4181-bb34-8c116409de42" (UID: "6086ad74-5d02-4181-bb34-8c116409de42"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 12:14:34 crc kubenswrapper[4745]: E0127 12:14:34.018997 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:34.518985042 +0000 UTC m=+167.323895730 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.028919 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6086ad74-5d02-4181-bb34-8c116409de42-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6086ad74-5d02-4181-bb34-8c116409de42" (UID: "6086ad74-5d02-4181-bb34-8c116409de42"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.032642 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6086ad74-5d02-4181-bb34-8c116409de42-kube-api-access-6kgsk" (OuterVolumeSpecName: "kube-api-access-6kgsk") pod "6086ad74-5d02-4181-bb34-8c116409de42" (UID: "6086ad74-5d02-4181-bb34-8c116409de42"). InnerVolumeSpecName "kube-api-access-6kgsk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.120334 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:34 crc kubenswrapper[4745]: E0127 12:14:34.120532 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:34.62050584 +0000 UTC m=+167.425416528 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.120702 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.120797 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kgsk\" (UniqueName: \"kubernetes.io/projected/6086ad74-5d02-4181-bb34-8c116409de42-kube-api-access-6kgsk\") on node \"crc\" DevicePath \"\""
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.120835 4745 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6086ad74-5d02-4181-bb34-8c116409de42-config-volume\") on node \"crc\" DevicePath \"\""
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.120848 4745 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6086ad74-5d02-4181-bb34-8c116409de42-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 27 12:14:34 crc kubenswrapper[4745]: E0127 12:14:34.121218 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:34.62119469 +0000 UTC m=+167.426105378 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.221404 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:34 crc kubenswrapper[4745]: E0127 12:14:34.221553 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:34.721530994 +0000 UTC m=+167.526441682 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.221837 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:34 crc kubenswrapper[4745]: E0127 12:14:34.222123 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:34.722115121 +0000 UTC m=+167.527025809 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.322435 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:34 crc kubenswrapper[4745]: E0127 12:14:34.322704 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:34.822656531 +0000 UTC m=+167.627567219 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.421189 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 12:14:34 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld
Jan 27 12:14:34 crc kubenswrapper[4745]: [+]process-running ok
Jan 27 12:14:34 crc kubenswrapper[4745]: healthz check failed
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.421268 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.423257 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:34 crc kubenswrapper[4745]: E0127 12:14:34.423890 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:34.92386141 +0000 UTC m=+167.728772098 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.439988 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"accaeefe-6bc4-4802-a2c1-f356d6c57222","Type":"ContainerStarted","Data":"4ed2b8694c4476ae13b83ceb406c7d4089b41b2f36f3db355c6854cf2b82ffbf"}
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.442023 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-swntl" event={"ID":"c1811fa8-9015-4fe0-8fad-2461d64cdffd","Type":"ContainerStarted","Data":"87ab3cc635eea7ee432b2a95811fc7420c1c4638e2c01c98a4a99cdeb03a2ebb"}
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.444955 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491920-s88fm" event={"ID":"6086ad74-5d02-4181-bb34-8c116409de42","Type":"ContainerDied","Data":"fb2b764cfa5cc44865312b24adb2d84b4224a6eb7bdf164eea11dbc2b8419743"}
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.444996 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb2b764cfa5cc44865312b24adb2d84b4224a6eb7bdf164eea11dbc2b8419743"
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.445013 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491920-s88fm"
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.512015 4745 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-l2frr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.512081 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" podUID="65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.512532 4745 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-l2frr container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.512562 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" podUID="65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.512617 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr"
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.513312 4745 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-l2frr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.513366 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" podUID="65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.513269 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"51721204e711d30714f91c160861fcbe9e1f7a2cd59dc67afd2147f0d9d2efab"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted"
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.513516 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" podUID="65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2" containerName="openshift-config-operator" containerID="cri-o://51721204e711d30714f91c160861fcbe9e1f7a2cd59dc67afd2147f0d9d2efab" gracePeriod=30
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.524221 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:34 crc kubenswrapper[4745]: E0127 12:14:34.524380 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:35.024325286 +0000 UTC m=+167.829235974 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.524683 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:34 crc kubenswrapper[4745]: E0127 12:14:34.525224 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:35.025189901 +0000 UTC m=+167.830100589 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.590955 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bmx2n"]
Jan 27 12:14:34 crc kubenswrapper[4745]: E0127 12:14:34.591226 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6086ad74-5d02-4181-bb34-8c116409de42" containerName="collect-profiles"
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.591246 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="6086ad74-5d02-4181-bb34-8c116409de42" containerName="collect-profiles"
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.591840 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="6086ad74-5d02-4181-bb34-8c116409de42" containerName="collect-profiles"
Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.601369 4745 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-bmx2n" Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.604929 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.607990 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bmx2n"] Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.625453 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:34 crc kubenswrapper[4745]: E0127 12:14:34.626005 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:35.125981308 +0000 UTC m=+167.930891996 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.626053 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:34 crc kubenswrapper[4745]: E0127 12:14:34.626380 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:35.126374 +0000 UTC m=+167.931284688 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:34 crc kubenswrapper[4745]: E0127 12:14:34.728177 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:35.228143495 +0000 UTC m=+168.033054193 (durationBeforeRetry 500ms). 
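Each failure above is followed by "No retries permitted until <timestamp> (durationBeforeRetry 500ms)": the operation executor records the failed volume operation and refuses to start the same one again until a backoff window has passed, which is why the identical pair of errors recurs roughly twice a second rather than in a tight loop. A sketch of that gate, assuming the usual exponential growth from an initial 500ms (the doubling and the cap are assumptions, not values quoted from these logs, which only ever show the initial 500ms window):

    package main

    import (
    	"fmt"
    	"time"
    )

    // retryGate is an illustrative sketch of the "No retries permitted until"
    // gating: after a failure, the operation may not restart before
    // lastFailure + durationBeforeRetry.
    type retryGate struct {
    	durationBeforeRetry time.Duration
    	notBefore           time.Time
    }

    func (g *retryGate) recordFailure(now time.Time) {
    	if g.durationBeforeRetry == 0 {
    		g.durationBeforeRetry = 500 * time.Millisecond
    	} else if g.durationBeforeRetry < 2*time.Minute {
    		g.durationBeforeRetry *= 2 // assumed growth and cap
    	}
    	g.notBefore = now.Add(g.durationBeforeRetry)
    }

    func (g *retryGate) mayRetry(now time.Time) bool { return !now.Before(g.notBefore) }

    func main() {
    	var g retryGate
    	now := time.Now()
    	g.recordFailure(now)
    	fmt.Printf("No retries permitted until %v (durationBeforeRetry %v)\n",
    		g.notBefore.Format("2006-01-02 15:04:05.000"), g.durationBeforeRetry)
    	fmt.Println("retry allowed immediately?", g.mayRetry(now))
    	fmt.Println("retry allowed after window?", g.mayRetry(now.Add(600*time.Millisecond)))
    }
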
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.727950 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.728603 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c6f4dda-1294-4903-a4c1-6685307c3b25-utilities\") pod \"certified-operators-bmx2n\" (UID: \"7c6f4dda-1294-4903-a4c1-6685307c3b25\") " pod="openshift-marketplace/certified-operators-bmx2n" Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.728695 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxf9r\" (UniqueName: \"kubernetes.io/projected/7c6f4dda-1294-4903-a4c1-6685307c3b25-kube-api-access-pxf9r\") pod \"certified-operators-bmx2n\" (UID: \"7c6f4dda-1294-4903-a4c1-6685307c3b25\") " pod="openshift-marketplace/certified-operators-bmx2n" Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.728841 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c6f4dda-1294-4903-a4c1-6685307c3b25-catalog-content\") pod \"certified-operators-bmx2n\" (UID: \"7c6f4dda-1294-4903-a4c1-6685307c3b25\") " pod="openshift-marketplace/certified-operators-bmx2n" Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.728942 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:34 crc kubenswrapper[4745]: E0127 12:14:34.729367 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:35.22935617 +0000 UTC m=+168.034266868 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.780936 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tzw6b"] Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.781800 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tzw6b" Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.785721 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.801231 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tzw6b"] Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.830752 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.831256 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c6f4dda-1294-4903-a4c1-6685307c3b25-utilities\") pod \"certified-operators-bmx2n\" (UID: \"7c6f4dda-1294-4903-a4c1-6685307c3b25\") " pod="openshift-marketplace/certified-operators-bmx2n" Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.831365 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxf9r\" (UniqueName: \"kubernetes.io/projected/7c6f4dda-1294-4903-a4c1-6685307c3b25-kube-api-access-pxf9r\") pod \"certified-operators-bmx2n\" (UID: \"7c6f4dda-1294-4903-a4c1-6685307c3b25\") " pod="openshift-marketplace/certified-operators-bmx2n" Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.831544 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c6f4dda-1294-4903-a4c1-6685307c3b25-catalog-content\") pod \"certified-operators-bmx2n\" (UID: \"7c6f4dda-1294-4903-a4c1-6685307c3b25\") " pod="openshift-marketplace/certified-operators-bmx2n" Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.831677 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/341a8942-834f-4f76-8269-7ecdecaaa1b0-utilities\") pod \"community-operators-tzw6b\" (UID: \"341a8942-834f-4f76-8269-7ecdecaaa1b0\") " pod="openshift-marketplace/community-operators-tzw6b" Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.831770 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccqcz\" (UniqueName: \"kubernetes.io/projected/341a8942-834f-4f76-8269-7ecdecaaa1b0-kube-api-access-ccqcz\") pod \"community-operators-tzw6b\" (UID: \"341a8942-834f-4f76-8269-7ecdecaaa1b0\") " 
pod="openshift-marketplace/community-operators-tzw6b" Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.831885 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/341a8942-834f-4f76-8269-7ecdecaaa1b0-catalog-content\") pod \"community-operators-tzw6b\" (UID: \"341a8942-834f-4f76-8269-7ecdecaaa1b0\") " pod="openshift-marketplace/community-operators-tzw6b" Jan 27 12:14:34 crc kubenswrapper[4745]: E0127 12:14:34.832289 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:35.332274938 +0000 UTC m=+168.137185626 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.832588 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c6f4dda-1294-4903-a4c1-6685307c3b25-utilities\") pod \"certified-operators-bmx2n\" (UID: \"7c6f4dda-1294-4903-a4c1-6685307c3b25\") " pod="openshift-marketplace/certified-operators-bmx2n" Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.832757 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c6f4dda-1294-4903-a4c1-6685307c3b25-catalog-content\") pod \"certified-operators-bmx2n\" (UID: \"7c6f4dda-1294-4903-a4c1-6685307c3b25\") " pod="openshift-marketplace/certified-operators-bmx2n" Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.851499 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxf9r\" (UniqueName: \"kubernetes.io/projected/7c6f4dda-1294-4903-a4c1-6685307c3b25-kube-api-access-pxf9r\") pod \"certified-operators-bmx2n\" (UID: \"7c6f4dda-1294-4903-a4c1-6685307c3b25\") " pod="openshift-marketplace/certified-operators-bmx2n" Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.921024 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bmx2n" Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.933161 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.933198 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/341a8942-834f-4f76-8269-7ecdecaaa1b0-utilities\") pod \"community-operators-tzw6b\" (UID: \"341a8942-834f-4f76-8269-7ecdecaaa1b0\") " pod="openshift-marketplace/community-operators-tzw6b" Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.933219 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccqcz\" (UniqueName: \"kubernetes.io/projected/341a8942-834f-4f76-8269-7ecdecaaa1b0-kube-api-access-ccqcz\") pod \"community-operators-tzw6b\" (UID: \"341a8942-834f-4f76-8269-7ecdecaaa1b0\") " pod="openshift-marketplace/community-operators-tzw6b" Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.933236 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/341a8942-834f-4f76-8269-7ecdecaaa1b0-catalog-content\") pod \"community-operators-tzw6b\" (UID: \"341a8942-834f-4f76-8269-7ecdecaaa1b0\") " pod="openshift-marketplace/community-operators-tzw6b" Jan 27 12:14:34 crc kubenswrapper[4745]: E0127 12:14:34.933467 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:35.433454526 +0000 UTC m=+168.238365214 (durationBeforeRetry 500ms). 
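The interleaved "operationExecutor.UnmountVolume started", "operationExecutor.MountVolume started", and "operationExecutor.VerifyControllerAttachedVolume started" lines come from a reconciliation loop: on every pass the kubelet compares the volumes its pods should have mounted (desired state) against what is actually mounted (actual state) and re-issues whatever is still outstanding. That is why the same two CSI operations reappear on every pass while the marketplace pods' emptyDir and projected volumes succeed immediately. A stripped-down sketch of the pattern, with hypothetical types:

    package main

    import "fmt"

    // volumeState is an illustrative stand-in for kubelet's desired/actual
    // state of the world; reconcile re-logs any operation still outstanding.
    type volumeState struct {
    	desired map[string]bool // volumes that should be mounted
    	actual  map[string]bool // volumes that are mounted
    }

    func reconcile(s volumeState) {
    	for v := range s.desired {
    		if !s.actual[v] {
    			fmt.Printf("operationExecutor.MountVolume started for volume %q\n", v)
    		}
    	}
    	for v := range s.actual {
    		if !s.desired[v] {
    			fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", v)
    		}
    	}
    }

    func main() {
    	s := volumeState{
    		desired: map[string]bool{"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8": true},
    		actual:  map[string]bool{"stale-volume-from-deleted-pod": true},
    	}
    	// Until both operations succeed, every reconciler pass logs them again.
    	reconcile(s)
    }
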
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.933578 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/341a8942-834f-4f76-8269-7ecdecaaa1b0-catalog-content\") pod \"community-operators-tzw6b\" (UID: \"341a8942-834f-4f76-8269-7ecdecaaa1b0\") " pod="openshift-marketplace/community-operators-tzw6b" Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.933627 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/341a8942-834f-4f76-8269-7ecdecaaa1b0-utilities\") pod \"community-operators-tzw6b\" (UID: \"341a8942-834f-4f76-8269-7ecdecaaa1b0\") " pod="openshift-marketplace/community-operators-tzw6b" Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.953650 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccqcz\" (UniqueName: \"kubernetes.io/projected/341a8942-834f-4f76-8269-7ecdecaaa1b0-kube-api-access-ccqcz\") pod \"community-operators-tzw6b\" (UID: \"341a8942-834f-4f76-8269-7ecdecaaa1b0\") " pod="openshift-marketplace/community-operators-tzw6b" Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.966228 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-255wr"] Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.967223 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-255wr" Jan 27 12:14:34 crc kubenswrapper[4745]: I0127 12:14:34.979676 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-255wr"] Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.034922 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:35 crc kubenswrapper[4745]: E0127 12:14:35.035120 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:35.535091018 +0000 UTC m=+168.340001706 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.035837 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.035996 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnv2n\" (UniqueName: \"kubernetes.io/projected/64c43381-42e2-4e01-9559-70c3c56070ea-kube-api-access-lnv2n\") pod \"certified-operators-255wr\" (UID: \"64c43381-42e2-4e01-9559-70c3c56070ea\") " pod="openshift-marketplace/certified-operators-255wr" Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.036139 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64c43381-42e2-4e01-9559-70c3c56070ea-catalog-content\") pod \"certified-operators-255wr\" (UID: \"64c43381-42e2-4e01-9559-70c3c56070ea\") " pod="openshift-marketplace/certified-operators-255wr" Jan 27 12:14:35 crc kubenswrapper[4745]: E0127 12:14:35.036242 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:35.53623183 +0000 UTC m=+168.341142518 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.036393 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64c43381-42e2-4e01-9559-70c3c56070ea-utilities\") pod \"certified-operators-255wr\" (UID: \"64c43381-42e2-4e01-9559-70c3c56070ea\") " pod="openshift-marketplace/certified-operators-255wr" Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.095662 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tzw6b" Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.137790 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.138057 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnv2n\" (UniqueName: \"kubernetes.io/projected/64c43381-42e2-4e01-9559-70c3c56070ea-kube-api-access-lnv2n\") pod \"certified-operators-255wr\" (UID: \"64c43381-42e2-4e01-9559-70c3c56070ea\") " pod="openshift-marketplace/certified-operators-255wr" Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.138107 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64c43381-42e2-4e01-9559-70c3c56070ea-catalog-content\") pod \"certified-operators-255wr\" (UID: \"64c43381-42e2-4e01-9559-70c3c56070ea\") " pod="openshift-marketplace/certified-operators-255wr" Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.138140 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64c43381-42e2-4e01-9559-70c3c56070ea-utilities\") pod \"certified-operators-255wr\" (UID: \"64c43381-42e2-4e01-9559-70c3c56070ea\") " pod="openshift-marketplace/certified-operators-255wr" Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.138650 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64c43381-42e2-4e01-9559-70c3c56070ea-utilities\") pod \"certified-operators-255wr\" (UID: \"64c43381-42e2-4e01-9559-70c3c56070ea\") " pod="openshift-marketplace/certified-operators-255wr" Jan 27 12:14:35 crc kubenswrapper[4745]: E0127 12:14:35.138749 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:35.638729857 +0000 UTC m=+168.443640545 (durationBeforeRetry 500ms). 
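The UniqueName values make the volume plugin explicit: emptyDir and projected volumes embed the owning pod UID and volume name, while CSI volumes use the form "kubernetes.io/csi/<driver>^<volumeHandle>", with "^" separating the driver name from the volume handle, as in the failing volume above. A small parser makes the convention visible (a hypothetical helper; the format is taken from the log lines themselves):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // parseCSIUniqueName splits a CSI volume UniqueName into its driver name
    // and volume handle.
    func parseCSIUniqueName(u string) (driver, handle string, err error) {
    	rest, ok := strings.CutPrefix(u, "kubernetes.io/csi/")
    	if !ok {
    		return "", "", fmt.Errorf("not a CSI unique name: %s", u)
    	}
    	driver, handle, ok = strings.Cut(rest, "^")
    	if !ok {
    		return "", "", fmt.Errorf("missing ^ separator: %s", u)
    	}
    	return driver, handle, nil
    }

    func main() {
    	d, h, err := parseCSIUniqueName("kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("driver:", d)        // the unregistered driver
    	fmt.Println("volume handle:", h) // the PVC-backed volume
    }
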
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.139307 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64c43381-42e2-4e01-9559-70c3c56070ea-catalog-content\") pod \"certified-operators-255wr\" (UID: \"64c43381-42e2-4e01-9559-70c3c56070ea\") " pod="openshift-marketplace/certified-operators-255wr" Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.162456 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnv2n\" (UniqueName: \"kubernetes.io/projected/64c43381-42e2-4e01-9559-70c3c56070ea-kube-api-access-lnv2n\") pod \"certified-operators-255wr\" (UID: \"64c43381-42e2-4e01-9559-70c3c56070ea\") " pod="openshift-marketplace/certified-operators-255wr" Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.166014 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zhnbq"] Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.167156 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zhnbq" Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.178113 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zhnbq"] Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.239775 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9vlg\" (UniqueName: \"kubernetes.io/projected/d2b41701-5113-4970-8d93-157bf16b3c06-kube-api-access-t9vlg\") pod \"community-operators-zhnbq\" (UID: \"d2b41701-5113-4970-8d93-157bf16b3c06\") " pod="openshift-marketplace/community-operators-zhnbq" Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.239867 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2b41701-5113-4970-8d93-157bf16b3c06-utilities\") pod \"community-operators-zhnbq\" (UID: \"d2b41701-5113-4970-8d93-157bf16b3c06\") " pod="openshift-marketplace/community-operators-zhnbq" Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.239911 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2b41701-5113-4970-8d93-157bf16b3c06-catalog-content\") pod \"community-operators-zhnbq\" (UID: \"d2b41701-5113-4970-8d93-157bf16b3c06\") " pod="openshift-marketplace/community-operators-zhnbq" Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.239942 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:35 crc kubenswrapper[4745]: E0127 12:14:35.240280 4745 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:35.740260625 +0000 UTC m=+168.545171313 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.296341 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-255wr" Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.341598 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.341855 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9vlg\" (UniqueName: \"kubernetes.io/projected/d2b41701-5113-4970-8d93-157bf16b3c06-kube-api-access-t9vlg\") pod \"community-operators-zhnbq\" (UID: \"d2b41701-5113-4970-8d93-157bf16b3c06\") " pod="openshift-marketplace/community-operators-zhnbq" Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.341886 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2b41701-5113-4970-8d93-157bf16b3c06-utilities\") pod \"community-operators-zhnbq\" (UID: \"d2b41701-5113-4970-8d93-157bf16b3c06\") " pod="openshift-marketplace/community-operators-zhnbq" Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.341908 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2b41701-5113-4970-8d93-157bf16b3c06-catalog-content\") pod \"community-operators-zhnbq\" (UID: \"d2b41701-5113-4970-8d93-157bf16b3c06\") " pod="openshift-marketplace/community-operators-zhnbq" Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.342333 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2b41701-5113-4970-8d93-157bf16b3c06-catalog-content\") pod \"community-operators-zhnbq\" (UID: \"d2b41701-5113-4970-8d93-157bf16b3c06\") " pod="openshift-marketplace/community-operators-zhnbq" Jan 27 12:14:35 crc kubenswrapper[4745]: E0127 12:14:35.342333 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:35.842315908 +0000 UTC m=+168.647226596 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.342882 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2b41701-5113-4970-8d93-157bf16b3c06-utilities\") pod \"community-operators-zhnbq\" (UID: \"d2b41701-5113-4970-8d93-157bf16b3c06\") " pod="openshift-marketplace/community-operators-zhnbq" Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.360192 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9vlg\" (UniqueName: \"kubernetes.io/projected/d2b41701-5113-4970-8d93-157bf16b3c06-kube-api-access-t9vlg\") pod \"community-operators-zhnbq\" (UID: \"d2b41701-5113-4970-8d93-157bf16b3c06\") " pod="openshift-marketplace/community-operators-zhnbq" Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.377177 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tzw6b"] Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.399562 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bmx2n"] Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.423218 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 12:14:35 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 27 12:14:35 crc kubenswrapper[4745]: [+]process-running ok Jan 27 12:14:35 crc kubenswrapper[4745]: healthz check failed Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.423274 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 12:14:35 crc kubenswrapper[4745]: W0127 12:14:35.430092 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c6f4dda_1294_4903_a4c1_6685307c3b25.slice/crio-1d4ef67a7790b75f979c0229c7f83f6f6bae2efd5df34c4912edc42e58db1048 WatchSource:0}: Error finding container 1d4ef67a7790b75f979c0229c7f83f6f6bae2efd5df34c4912edc42e58db1048: Status 404 returned error can't find the container with id 1d4ef67a7790b75f979c0229c7f83f6f6bae2efd5df34c4912edc42e58db1048 Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.443366 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:35 crc kubenswrapper[4745]: E0127 12:14:35.443708 4745 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:35.943691112 +0000 UTC m=+168.748601800 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.454906 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmx2n" event={"ID":"7c6f4dda-1294-4903-a4c1-6685307c3b25","Type":"ContainerStarted","Data":"1d4ef67a7790b75f979c0229c7f83f6f6bae2efd5df34c4912edc42e58db1048"} Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.459059 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d5d9f268-24c4-4bd8-9de3-84d3eeab64c2","Type":"ContainerStarted","Data":"22790f34bb80aa639f1bd0c72d9102b4c4080192d00055f7fcf9e7e805d3e32d"} Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.462553 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzw6b" event={"ID":"341a8942-834f-4f76-8269-7ecdecaaa1b0","Type":"ContainerStarted","Data":"c24c2497e7279fcfb476fddd3237415918ef081866244f3cd8fa278ee8f60478"} Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.479362 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=6.479346457 podStartE2EDuration="6.479346457s" podCreationTimestamp="2026-01-27 12:14:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:35.476770783 +0000 UTC m=+168.281681491" watchObservedRunningTime="2026-01-27 12:14:35.479346457 +0000 UTC m=+168.284257145" Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.487301 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zhnbq" Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.543786 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:35 crc kubenswrapper[4745]: E0127 12:14:35.545281 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:36.045265592 +0000 UTC m=+168.850176280 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.645379 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:35 crc kubenswrapper[4745]: E0127 12:14:35.645843 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:36.145826152 +0000 UTC m=+168.950736850 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.703458 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-255wr"] Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.751831 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:35 crc kubenswrapper[4745]: E0127 12:14:35.752591 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:36.252570941 +0000 UTC m=+169.057481619 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.773619 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zhnbq"] Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.813252 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-lwzbq" Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.854857 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:35 crc kubenswrapper[4745]: E0127 12:14:35.856535 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:36.356523129 +0000 UTC m=+169.161433817 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.956057 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:35 crc kubenswrapper[4745]: E0127 12:14:35.956685 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:36.456666877 +0000 UTC m=+169.261577565 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.967860 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 12:14:35 crc kubenswrapper[4745]: I0127 12:14:35.967951 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.058189 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:36 crc kubenswrapper[4745]: E0127 12:14:36.058591 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:36.558572946 +0000 UTC m=+169.363483644 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.159482 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:36 crc kubenswrapper[4745]: E0127 12:14:36.160000 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:36.659981991 +0000 UTC m=+169.464892679 (durationBeforeRetry 500ms). 
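The probe lines follow one recipe: kubelet issues an HTTP GET against the configured endpoint and treats transport errors ("connect: connection refused" while a process is down or restarting) and error status codes alike as failures. Readiness failures only remove the pod from service endpoints; repeated liveness failures kill the container, as happened to openshift-config-operator above ("failed liveness probe, will be restarted", gracePeriod=30). A minimal sketch of such a probe, assuming the conventional 2xx/3xx success range:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // probe issues a GET and reports failure on transport errors or
    // non-success status codes, mirroring the log output above.
    func probe(url string) error {
    	client := &http.Client{Timeout: 1 * time.Second}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
    		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	// Same shape as the machine-config-daemon liveness check above.
    	if err := probe("http://127.0.0.1:8798/health"); err != nil {
    		fmt.Printf("Probe failed: %v\n", err)
    	}
    }
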
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.160215 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:36 crc kubenswrapper[4745]: E0127 12:14:36.160705 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:36.660689772 +0000 UTC m=+169.465600460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.261058 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:36 crc kubenswrapper[4745]: E0127 12:14:36.261376 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:36.761356985 +0000 UTC m=+169.566267673 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.363024 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:36 crc kubenswrapper[4745]: E0127 12:14:36.363424 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:36.863407698 +0000 UTC m=+169.668318386 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.420638 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 12:14:36 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 27 12:14:36 crc kubenswrapper[4745]: [+]process-running ok Jan 27 12:14:36 crc kubenswrapper[4745]: healthz check failed Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.420719 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.464190 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:36 crc kubenswrapper[4745]: E0127 12:14:36.464366 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:36.96434356 +0000 UTC m=+169.769254248 (durationBeforeRetry 500ms). 
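The router's startup probe also shows the aggregated healthz format: each named sub-check prints as "[+]<name> ok" or "[-]<name> failed: reason withheld" (the detail is suppressed unless verbose output is requested), and any failing check turns the endpoint into the statuscode 500 the probe reports. A sketch that reproduces that output shape:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // check is a named health sub-check; healthz renders the aggregate body
    // in the [+]/[-] format quoted in the router probe output above.
    type check struct {
    	name string
    	ok   bool
    }

    func healthz(checks []check) (body string, healthy bool) {
    	var b strings.Builder
    	healthy = true
    	for _, c := range checks {
    		if c.ok {
    			fmt.Fprintf(&b, "[+]%s ok\n", c.name)
    		} else {
    			fmt.Fprintf(&b, "[-]%s failed: reason withheld\n", c.name)
    			healthy = false
    		}
    	}
    	if !healthy {
    		b.WriteString("healthz check failed\n")
    	}
    	return b.String(), healthy
    }

    func main() {
    	body, ok := healthz([]check{
    		{"backend-http", false},
    		{"has-synced", false},
    		{"process-running", true},
    	})
    	fmt.Print(body)
    	fmt.Println("healthy:", ok) // a probe would see statuscode 500 here
    }
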
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.468577 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-255wr" event={"ID":"64c43381-42e2-4e01-9559-70c3c56070ea","Type":"ContainerStarted","Data":"26ebc03ce5679d50a88b967ef99aa04bdd01f6b47a3944ab1a4e3907e0fe03a7"} Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.470226 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-swntl" event={"ID":"c1811fa8-9015-4fe0-8fad-2461d64cdffd","Type":"ContainerStarted","Data":"1ec7a3c221d157d85432776ea1a2ebfe7a583c59fcebaa05a887f308decd6cfb"} Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.472063 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhnbq" event={"ID":"d2b41701-5113-4970-8d93-157bf16b3c06","Type":"ContainerStarted","Data":"5bc90ef99a65d963df83582620079e3ec0a3650947f6c5ddf77d593a830d8946"} Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.473373 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-xhn4h" Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.521268 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbsbc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.521335 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.521345 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbsbc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.521432 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.565248 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:36 crc kubenswrapper[4745]: E0127 12:14:36.565602 4745 
Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.666488 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:36 crc kubenswrapper[4745]: E0127 12:14:36.666616 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:37.166584223 +0000 UTC m=+169.971494911 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.666944 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:36 crc kubenswrapper[4745]: E0127 12:14:36.667461 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:37.167449208 +0000 UTC m=+169.972359906 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.705613 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-tf24j"
Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.705754 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-tf24j"
Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.707104 4745 patch_prober.go:28] interesting pod/apiserver-76f77b778f-tf24j container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.707164 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-tf24j" podUID="cef45995-4242-499f-adeb-cc12aa630b5c" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.766016 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hw272"]
Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.766988 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hw272"
Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.767859 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:36 crc kubenswrapper[4745]: E0127 12:14:36.768015 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:37.267998248 +0000 UTC m=+170.072908936 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.768108 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:36 crc kubenswrapper[4745]: E0127 12:14:36.768387 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:37.268379309 +0000 UTC m=+170.073289997 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.769254 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.779175 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hw272"]
Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.870676 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:36 crc kubenswrapper[4745]: E0127 12:14:36.871118 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:37.371092551 +0000 UTC m=+170.176003239 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.871694 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.871792 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d114857-b077-4798-b578-b9a15645d31f-utilities\") pod \"redhat-marketplace-hw272\" (UID: \"6d114857-b077-4798-b578-b9a15645d31f\") " pod="openshift-marketplace/redhat-marketplace-hw272"
Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.871890 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d114857-b077-4798-b578-b9a15645d31f-catalog-content\") pod \"redhat-marketplace-hw272\" (UID: \"6d114857-b077-4798-b578-b9a15645d31f\") " pod="openshift-marketplace/redhat-marketplace-hw272"
Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.871926 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfrhw\" (UniqueName: \"kubernetes.io/projected/6d114857-b077-4798-b578-b9a15645d31f-kube-api-access-kfrhw\") pod \"redhat-marketplace-hw272\" (UID: \"6d114857-b077-4798-b578-b9a15645d31f\") " pod="openshift-marketplace/redhat-marketplace-hw272"
Jan 27 12:14:36 crc kubenswrapper[4745]: E0127 12:14:36.873533 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:37.373516911 +0000 UTC m=+170.178427599 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.972616 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.972788 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d114857-b077-4798-b578-b9a15645d31f-catalog-content\") pod \"redhat-marketplace-hw272\" (UID: \"6d114857-b077-4798-b578-b9a15645d31f\") " pod="openshift-marketplace/redhat-marketplace-hw272"
Jan 27 12:14:36 crc kubenswrapper[4745]: E0127 12:14:36.972898 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:37.472849836 +0000 UTC m=+170.277760554 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.973093 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfrhw\" (UniqueName: \"kubernetes.io/projected/6d114857-b077-4798-b578-b9a15645d31f-kube-api-access-kfrhw\") pod \"redhat-marketplace-hw272\" (UID: \"6d114857-b077-4798-b578-b9a15645d31f\") " pod="openshift-marketplace/redhat-marketplace-hw272"
Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.973272 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d114857-b077-4798-b578-b9a15645d31f-catalog-content\") pod \"redhat-marketplace-hw272\" (UID: \"6d114857-b077-4798-b578-b9a15645d31f\") " pod="openshift-marketplace/redhat-marketplace-hw272"
Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.973453 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.973553 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d114857-b077-4798-b578-b9a15645d31f-utilities\") pod \"redhat-marketplace-hw272\" (UID: \"6d114857-b077-4798-b578-b9a15645d31f\") " pod="openshift-marketplace/redhat-marketplace-hw272"
Jan 27 12:14:36 crc kubenswrapper[4745]: E0127 12:14:36.973825 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:37.473791263 +0000 UTC m=+170.278702131 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.974148 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d114857-b077-4798-b578-b9a15645d31f-utilities\") pod \"redhat-marketplace-hw272\" (UID: \"6d114857-b077-4798-b578-b9a15645d31f\") " pod="openshift-marketplace/redhat-marketplace-hw272"
Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.993350 4745 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-ndbtg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Jan 27 12:14:36 crc kubenswrapper[4745]: I0127 12:14:36.993420 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg" podUID="1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused"
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.005984 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfrhw\" (UniqueName: \"kubernetes.io/projected/6d114857-b077-4798-b578-b9a15645d31f-kube-api-access-kfrhw\") pod \"redhat-marketplace-hw272\" (UID: \"6d114857-b077-4798-b578-b9a15645d31f\") " pod="openshift-marketplace/redhat-marketplace-hw272"
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.034849 4745 csr.go:261] certificate signing request csr-mxgn9 is approved, waiting to be issued
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.040891 4745 csr.go:257] certificate signing request csr-mxgn9 is issued
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.051330 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f"
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.066779 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qt59f"
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.074574 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:37 crc kubenswrapper[4745]: E0127 12:14:37.074730 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:37.574705864 +0000 UTC m=+170.379616552 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.074785 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:37 crc kubenswrapper[4745]: E0127 12:14:37.075171 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:37.575154887 +0000 UTC m=+170.380065645 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.082620 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hw272" Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.164157 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fzw5q"] Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.165464 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fzw5q" Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.178500 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.178756 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw9nh\" (UniqueName: \"kubernetes.io/projected/36154dea-ca68-4ca6-8e2f-83a669152ca7-kube-api-access-dw9nh\") pod \"redhat-marketplace-fzw5q\" (UID: \"36154dea-ca68-4ca6-8e2f-83a669152ca7\") " pod="openshift-marketplace/redhat-marketplace-fzw5q" Jan 27 12:14:37 crc kubenswrapper[4745]: E0127 12:14:37.178837 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:37.678801236 +0000 UTC m=+170.483711934 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.178909 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.178967 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36154dea-ca68-4ca6-8e2f-83a669152ca7-utilities\") pod \"redhat-marketplace-fzw5q\" (UID: \"36154dea-ca68-4ca6-8e2f-83a669152ca7\") " pod="openshift-marketplace/redhat-marketplace-fzw5q" Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.178992 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36154dea-ca68-4ca6-8e2f-83a669152ca7-catalog-content\") pod \"redhat-marketplace-fzw5q\" (UID: \"36154dea-ca68-4ca6-8e2f-83a669152ca7\") " pod="openshift-marketplace/redhat-marketplace-fzw5q" Jan 27 12:14:37 crc kubenswrapper[4745]: E0127 12:14:37.179430 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:37.679423534 +0000 UTC m=+170.484334222 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.181854 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fzw5q"] Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.279806 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.282050 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36154dea-ca68-4ca6-8e2f-83a669152ca7-utilities\") pod \"redhat-marketplace-fzw5q\" (UID: \"36154dea-ca68-4ca6-8e2f-83a669152ca7\") " pod="openshift-marketplace/redhat-marketplace-fzw5q" Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.282125 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36154dea-ca68-4ca6-8e2f-83a669152ca7-catalog-content\") pod \"redhat-marketplace-fzw5q\" (UID: \"36154dea-ca68-4ca6-8e2f-83a669152ca7\") " pod="openshift-marketplace/redhat-marketplace-fzw5q" Jan 27 12:14:37 crc kubenswrapper[4745]: E0127 12:14:37.282170 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:37.782144236 +0000 UTC m=+170.587054934 (durationBeforeRetry 500ms). 
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.282207 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw9nh\" (UniqueName: \"kubernetes.io/projected/36154dea-ca68-4ca6-8e2f-83a669152ca7-kube-api-access-dw9nh\") pod \"redhat-marketplace-fzw5q\" (UID: \"36154dea-ca68-4ca6-8e2f-83a669152ca7\") " pod="openshift-marketplace/redhat-marketplace-fzw5q"
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.282516 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36154dea-ca68-4ca6-8e2f-83a669152ca7-catalog-content\") pod \"redhat-marketplace-fzw5q\" (UID: \"36154dea-ca68-4ca6-8e2f-83a669152ca7\") " pod="openshift-marketplace/redhat-marketplace-fzw5q"
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.282732 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36154dea-ca68-4ca6-8e2f-83a669152ca7-utilities\") pod \"redhat-marketplace-fzw5q\" (UID: \"36154dea-ca68-4ca6-8e2f-83a669152ca7\") " pod="openshift-marketplace/redhat-marketplace-fzw5q"
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.299970 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw9nh\" (UniqueName: \"kubernetes.io/projected/36154dea-ca68-4ca6-8e2f-83a669152ca7-kube-api-access-dw9nh\") pod \"redhat-marketplace-fzw5q\" (UID: \"36154dea-ca68-4ca6-8e2f-83a669152ca7\") " pod="openshift-marketplace/redhat-marketplace-fzw5q"
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.317020 4745 patch_prober.go:28] interesting pod/console-f9d7485db-zqrwf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.317087 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zqrwf" podUID="94eb6425-bdf2-43d1-926e-c94700a985be" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused"
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.320533 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7xjzm"
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.328607 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hw272"]
Jan 27 12:14:37 crc kubenswrapper[4745]: W0127 12:14:37.339610 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d114857_b077_4798_b578_b9a15645d31f.slice/crio-ab5b4953a8f2277dad071de4e8ebf65992d6f11b90522eb0ef4d900374cf36da WatchSource:0}: Error finding container ab5b4953a8f2277dad071de4e8ebf65992d6f11b90522eb0ef4d900374cf36da: Status 404 returned error can't find the container with id ab5b4953a8f2277dad071de4e8ebf65992d6f11b90522eb0ef4d900374cf36da
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.383797 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:37 crc kubenswrapper[4745]: E0127 12:14:37.385236 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:37.885221509 +0000 UTC m=+170.690132197 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.429067 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 12:14:37 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld
Jan 27 12:14:37 crc kubenswrapper[4745]: [+]process-running ok
Jan 27 12:14:37 crc kubenswrapper[4745]: healthz check failed
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.429139 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.452469 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-jfk2x"
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.490253 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:37 crc kubenswrapper[4745]: E0127 12:14:37.491025 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:37.990994459 +0000 UTC m=+170.795905147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.491956 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fzw5q"
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.500232 4745 generic.go:334] "Generic (PLEG): container finished" podID="d5d9f268-24c4-4bd8-9de3-84d3eeab64c2" containerID="22790f34bb80aa639f1bd0c72d9102b4c4080192d00055f7fcf9e7e805d3e32d" exitCode=0
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.500342 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d5d9f268-24c4-4bd8-9de3-84d3eeab64c2","Type":"ContainerDied","Data":"22790f34bb80aa639f1bd0c72d9102b4c4080192d00055f7fcf9e7e805d3e32d"}
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.502532 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hw272" event={"ID":"6d114857-b077-4798-b578-b9a15645d31f","Type":"ContainerStarted","Data":"ab5b4953a8f2277dad071de4e8ebf65992d6f11b90522eb0ef4d900374cf36da"}
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.504907 4745 generic.go:334] "Generic (PLEG): container finished" podID="accaeefe-6bc4-4802-a2c1-f356d6c57222" containerID="4ed2b8694c4476ae13b83ceb406c7d4089b41b2f36f3db355c6854cf2b82ffbf" exitCode=0
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.505504 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"accaeefe-6bc4-4802-a2c1-f356d6c57222","Type":"ContainerDied","Data":"4ed2b8694c4476ae13b83ceb406c7d4089b41b2f36f3db355c6854cf2b82ffbf"}
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.511538 4745 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-l2frr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.511990 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" podUID="65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.592642 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:38.093748663 +0000 UTC m=+170.898659411 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.690093 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-skmp4" Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.696604 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:37 crc kubenswrapper[4745]: E0127 12:14:37.696733 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:38.196716453 +0000 UTC m=+171.001627141 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.696900 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:37 crc kubenswrapper[4745]: E0127 12:14:37.698714 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:38.19870625 +0000 UTC m=+171.003616938 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.700027 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6wlnl" Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.768916 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9tkgm"] Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.770013 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9tkgm" Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.775211 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.784674 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9tkgm"] Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.800981 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.801158 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-224b6\" (UniqueName: \"kubernetes.io/projected/7ff89667-3b76-4571-a07b-d43bce0a2e5b-kube-api-access-224b6\") pod \"redhat-operators-9tkgm\" (UID: \"7ff89667-3b76-4571-a07b-d43bce0a2e5b\") " pod="openshift-marketplace/redhat-operators-9tkgm" Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.801189 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff89667-3b76-4571-a07b-d43bce0a2e5b-utilities\") pod \"redhat-operators-9tkgm\" (UID: \"7ff89667-3b76-4571-a07b-d43bce0a2e5b\") " pod="openshift-marketplace/redhat-operators-9tkgm" Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.801229 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff89667-3b76-4571-a07b-d43bce0a2e5b-catalog-content\") pod \"redhat-operators-9tkgm\" (UID: \"7ff89667-3b76-4571-a07b-d43bce0a2e5b\") " pod="openshift-marketplace/redhat-operators-9tkgm" Jan 27 12:14:37 crc kubenswrapper[4745]: E0127 12:14:37.802017 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:38.302004089 +0000 UTC m=+171.106914767 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.875360 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fzw5q"] Jan 27 12:14:37 crc kubenswrapper[4745]: W0127 12:14:37.884039 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36154dea_ca68_4ca6_8e2f_83a669152ca7.slice/crio-1c75497f7121f7674d8cec25a04a66b97409f90bb48a3fe99ff45e1cab5cc649 WatchSource:0}: Error finding container 1c75497f7121f7674d8cec25a04a66b97409f90bb48a3fe99ff45e1cab5cc649: Status 404 returned error can't find the container with id 1c75497f7121f7674d8cec25a04a66b97409f90bb48a3fe99ff45e1cab5cc649 Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.902662 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-224b6\" (UniqueName: \"kubernetes.io/projected/7ff89667-3b76-4571-a07b-d43bce0a2e5b-kube-api-access-224b6\") pod \"redhat-operators-9tkgm\" (UID: \"7ff89667-3b76-4571-a07b-d43bce0a2e5b\") " pod="openshift-marketplace/redhat-operators-9tkgm" Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.902719 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff89667-3b76-4571-a07b-d43bce0a2e5b-utilities\") pod \"redhat-operators-9tkgm\" (UID: \"7ff89667-3b76-4571-a07b-d43bce0a2e5b\") " pod="openshift-marketplace/redhat-operators-9tkgm" Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.902764 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff89667-3b76-4571-a07b-d43bce0a2e5b-catalog-content\") pod \"redhat-operators-9tkgm\" (UID: \"7ff89667-3b76-4571-a07b-d43bce0a2e5b\") " pod="openshift-marketplace/redhat-operators-9tkgm" Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.902794 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:37 crc kubenswrapper[4745]: E0127 12:14:37.903322 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:38.403302061 +0000 UTC m=+171.208212749 (durationBeforeRetry 500ms). 
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.903397 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff89667-3b76-4571-a07b-d43bce0a2e5b-catalog-content\") pod \"redhat-operators-9tkgm\" (UID: \"7ff89667-3b76-4571-a07b-d43bce0a2e5b\") " pod="openshift-marketplace/redhat-operators-9tkgm"
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.903450 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff89667-3b76-4571-a07b-d43bce0a2e5b-utilities\") pod \"redhat-operators-9tkgm\" (UID: \"7ff89667-3b76-4571-a07b-d43bce0a2e5b\") " pod="openshift-marketplace/redhat-operators-9tkgm"
Jan 27 12:14:37 crc kubenswrapper[4745]: I0127 12:14:37.927255 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-224b6\" (UniqueName: \"kubernetes.io/projected/7ff89667-3b76-4571-a07b-d43bce0a2e5b-kube-api-access-224b6\") pod \"redhat-operators-9tkgm\" (UID: \"7ff89667-3b76-4571-a07b-d43bce0a2e5b\") " pod="openshift-marketplace/redhat-operators-9tkgm"
Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.004577 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:38 crc kubenswrapper[4745]: E0127 12:14:38.004759 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:38.504730316 +0000 UTC m=+171.309641004 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.005090 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:38 crc kubenswrapper[4745]: E0127 12:14:38.005474 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:38.505465247 +0000 UTC m=+171.310375935 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.042093 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-27 12:09:37 +0000 UTC, rotation deadline is 2026-10-21 01:06:56.522471637 +0000 UTC
Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.042203 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6396h52m18.480288024s for next certificate rotation
Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.086684 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9tkgm"
Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.105856 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:38 crc kubenswrapper[4745]: E0127 12:14:38.106097 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:38.606065968 +0000 UTC m=+171.410976656 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.106328 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:38 crc kubenswrapper[4745]: E0127 12:14:38.106968 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:38.606960334 +0000 UTC m=+171.411871022 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.180518 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2c9wm"]
Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.182441 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2c9wm"
Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.187309 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2c9wm"]
Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.207412 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:38 crc kubenswrapper[4745]: E0127 12:14:38.207566 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:38.707536474 +0000 UTC m=+171.512447162 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.208102 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.208149 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vkw5\" (UniqueName: \"kubernetes.io/projected/3fcec544-9ef8-406d-9f01-b3ceabf2b033-kube-api-access-5vkw5\") pod \"redhat-operators-2c9wm\" (UID: \"3fcec544-9ef8-406d-9f01-b3ceabf2b033\") " pod="openshift-marketplace/redhat-operators-2c9wm"
Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.208249 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fcec544-9ef8-406d-9f01-b3ceabf2b033-utilities\") pod \"redhat-operators-2c9wm\" (UID: \"3fcec544-9ef8-406d-9f01-b3ceabf2b033\") " pod="openshift-marketplace/redhat-operators-2c9wm"
Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.208303 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fcec544-9ef8-406d-9f01-b3ceabf2b033-catalog-content\") pod \"redhat-operators-2c9wm\" (UID: \"3fcec544-9ef8-406d-9f01-b3ceabf2b033\") " pod="openshift-marketplace/redhat-operators-2c9wm"
Jan 27 12:14:38 crc kubenswrapper[4745]: E0127 12:14:38.208795 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:38.708754959 +0000 UTC m=+171.513665647 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.309435 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:38 crc kubenswrapper[4745]: E0127 12:14:38.309746 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:38.809703121 +0000 UTC m=+171.614613809 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.310040 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fcec544-9ef8-406d-9f01-b3ceabf2b033-utilities\") pod \"redhat-operators-2c9wm\" (UID: \"3fcec544-9ef8-406d-9f01-b3ceabf2b033\") " pod="openshift-marketplace/redhat-operators-2c9wm"
Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.310138 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fcec544-9ef8-406d-9f01-b3ceabf2b033-catalog-content\") pod \"redhat-operators-2c9wm\" (UID: \"3fcec544-9ef8-406d-9f01-b3ceabf2b033\") " pod="openshift-marketplace/redhat-operators-2c9wm"
Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.310180 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.310200 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vkw5\" (UniqueName: \"kubernetes.io/projected/3fcec544-9ef8-406d-9f01-b3ceabf2b033-kube-api-access-5vkw5\") pod \"redhat-operators-2c9wm\" (UID: \"3fcec544-9ef8-406d-9f01-b3ceabf2b033\") " pod="openshift-marketplace/redhat-operators-2c9wm"
Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.311264 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fcec544-9ef8-406d-9f01-b3ceabf2b033-utilities\") pod \"redhat-operators-2c9wm\" (UID: \"3fcec544-9ef8-406d-9f01-b3ceabf2b033\") " pod="openshift-marketplace/redhat-operators-2c9wm"
Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.311495 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fcec544-9ef8-406d-9f01-b3ceabf2b033-catalog-content\") pod \"redhat-operators-2c9wm\" (UID: \"3fcec544-9ef8-406d-9f01-b3ceabf2b033\") " pod="openshift-marketplace/redhat-operators-2c9wm"
Jan 27 12:14:38 crc kubenswrapper[4745]: E0127 12:14:38.311737 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:38.811724399 +0000 UTC m=+171.616635087 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.347184 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vkw5\" (UniqueName: \"kubernetes.io/projected/3fcec544-9ef8-406d-9f01-b3ceabf2b033-kube-api-access-5vkw5\") pod \"redhat-operators-2c9wm\" (UID: \"3fcec544-9ef8-406d-9f01-b3ceabf2b033\") " pod="openshift-marketplace/redhat-operators-2c9wm" Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.413623 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:38 crc kubenswrapper[4745]: E0127 12:14:38.421530 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:38.921496334 +0000 UTC m=+171.726407032 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.424015 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 12:14:38 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 27 12:14:38 crc kubenswrapper[4745]: [+]process-running ok Jan 27 12:14:38 crc kubenswrapper[4745]: healthz check failed Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.424081 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.476334 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9tkgm"] Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.507181 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2c9wm" Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.513434 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmx2n" event={"ID":"7c6f4dda-1294-4903-a4c1-6685307c3b25","Type":"ContainerStarted","Data":"4c36cc87e141e730d84c81478b435a07113a1804c5bf5778526ff0a68e8c51d7"} Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.515235 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:38 crc kubenswrapper[4745]: E0127 12:14:38.515731 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:39.015717823 +0000 UTC m=+171.820628511 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.519358 4745 generic.go:334] "Generic (PLEG): container finished" podID="64c43381-42e2-4e01-9559-70c3c56070ea" containerID="af12074e02d034bfa4b98440c52d9a163f7c2a0063dfbc4bedb772a114b592f0" exitCode=0 Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.519661 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-255wr" event={"ID":"64c43381-42e2-4e01-9559-70c3c56070ea","Type":"ContainerDied","Data":"af12074e02d034bfa4b98440c52d9a163f7c2a0063dfbc4bedb772a114b592f0"} Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.521020 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzw5q" event={"ID":"36154dea-ca68-4ca6-8e2f-83a669152ca7","Type":"ContainerStarted","Data":"1c75497f7121f7674d8cec25a04a66b97409f90bb48a3fe99ff45e1cab5cc649"} Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.522924 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhnbq" event={"ID":"d2b41701-5113-4970-8d93-157bf16b3c06","Type":"ContainerStarted","Data":"fcbe507db2aa8230c620c38f4555203eb8e317d1f966e60a3c440bc9bb509a4a"} Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.525351 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9tkgm" event={"ID":"7ff89667-3b76-4571-a07b-d43bce0a2e5b","Type":"ContainerStarted","Data":"f1fa1b7133677d5e9717cc2b4350e0c3c9e919f3dfd146e9bded94f918c61302"} Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.528389 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-l2frr_65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2/openshift-config-operator/0.log" Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.528877 4745 
generic.go:334] "Generic (PLEG): container finished" podID="65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2" containerID="51721204e711d30714f91c160861fcbe9e1f7a2cd59dc67afd2147f0d9d2efab" exitCode=255 Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.528980 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" event={"ID":"65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2","Type":"ContainerDied","Data":"51721204e711d30714f91c160861fcbe9e1f7a2cd59dc67afd2147f0d9d2efab"} Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.651716 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:38 crc kubenswrapper[4745]: E0127 12:14:38.652114 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:39.152082492 +0000 UTC m=+171.956993190 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.652335 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:38 crc kubenswrapper[4745]: E0127 12:14:38.652729 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:39.152717841 +0000 UTC m=+171.957628529 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.753434 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:38 crc kubenswrapper[4745]: E0127 12:14:38.753638 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:39.253606311 +0000 UTC m=+172.058517009 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.753860 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:38 crc kubenswrapper[4745]: E0127 12:14:38.754131 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:39.254119945 +0000 UTC m=+172.059030623 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.856701 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:38 crc kubenswrapper[4745]: E0127 12:14:38.857405 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:39.357382993 +0000 UTC m=+172.162293681 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.880560 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.957934 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5d9f268-24c4-4bd8-9de3-84d3eeab64c2-kube-api-access\") pod \"d5d9f268-24c4-4bd8-9de3-84d3eeab64c2\" (UID: \"d5d9f268-24c4-4bd8-9de3-84d3eeab64c2\") " Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.957968 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5d9f268-24c4-4bd8-9de3-84d3eeab64c2-kubelet-dir\") pod \"d5d9f268-24c4-4bd8-9de3-84d3eeab64c2\" (UID: \"d5d9f268-24c4-4bd8-9de3-84d3eeab64c2\") " Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.958097 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.958337 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5d9f268-24c4-4bd8-9de3-84d3eeab64c2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d5d9f268-24c4-4bd8-9de3-84d3eeab64c2" (UID: "d5d9f268-24c4-4bd8-9de3-84d3eeab64c2"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 12:14:38 crc kubenswrapper[4745]: E0127 12:14:38.958355 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:39.458342705 +0000 UTC m=+172.263253403 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.965735 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5d9f268-24c4-4bd8-9de3-84d3eeab64c2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d5d9f268-24c4-4bd8-9de3-84d3eeab64c2" (UID: "d5d9f268-24c4-4bd8-9de3-84d3eeab64c2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:14:38 crc kubenswrapper[4745]: I0127 12:14:38.979215 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2c9wm"] Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.059410 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.060136 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5d9f268-24c4-4bd8-9de3-84d3eeab64c2-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.060156 4745 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5d9f268-24c4-4bd8-9de3-84d3eeab64c2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 12:14:39 crc kubenswrapper[4745]: E0127 12:14:39.060234 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:39.560211874 +0000 UTC m=+172.365122562 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.064783 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.160394 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/accaeefe-6bc4-4802-a2c1-f356d6c57222-kubelet-dir\") pod \"accaeefe-6bc4-4802-a2c1-f356d6c57222\" (UID: \"accaeefe-6bc4-4802-a2c1-f356d6c57222\") " Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.160611 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/accaeefe-6bc4-4802-a2c1-f356d6c57222-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "accaeefe-6bc4-4802-a2c1-f356d6c57222" (UID: "accaeefe-6bc4-4802-a2c1-f356d6c57222"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.160651 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/accaeefe-6bc4-4802-a2c1-f356d6c57222-kube-api-access\") pod \"accaeefe-6bc4-4802-a2c1-f356d6c57222\" (UID: \"accaeefe-6bc4-4802-a2c1-f356d6c57222\") " Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.160831 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.160914 4745 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/accaeefe-6bc4-4802-a2c1-f356d6c57222-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 12:14:39 crc kubenswrapper[4745]: E0127 12:14:39.161287 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:39.661268658 +0000 UTC m=+172.466179406 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.166424 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/accaeefe-6bc4-4802-a2c1-f356d6c57222-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "accaeefe-6bc4-4802-a2c1-f356d6c57222" (UID: "accaeefe-6bc4-4802-a2c1-f356d6c57222"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.181578 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ndbtg"] Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.181790 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg" podUID="1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1" containerName="controller-manager" containerID="cri-o://d0b913a0cbf8c0c3b713fd52b0a4c1a1231240725e7e14e2ba572ac4250aab3f" gracePeriod=30 Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.194766 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg" Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.202937 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4"] Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.203137 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4" podUID="46ec327c-832f-4a20-9b99-1aa3315c312f" containerName="route-controller-manager" containerID="cri-o://31305266d1570d01864c2f7df86ea90fac3091ac25bd456054aac3c15dc7da37" gracePeriod=30 Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.261790 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:39 crc kubenswrapper[4745]: E0127 12:14:39.261959 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:39.761937872 +0000 UTC m=+172.566848560 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.262123 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.262181 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/accaeefe-6bc4-4802-a2c1-f356d6c57222-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 12:14:39 crc kubenswrapper[4745]: E0127 12:14:39.262416 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:39.762404435 +0000 UTC m=+172.567315123 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:39 crc kubenswrapper[4745]: E0127 12:14:39.325577 4745 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46ec327c_832f_4a20_9b99_1aa3315c312f.slice/crio-31305266d1570d01864c2f7df86ea90fac3091ac25bd456054aac3c15dc7da37.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ffaf6a7_4a55_48aa_a1aa_1ac8149dbbc1.slice/crio-d0b913a0cbf8c0c3b713fd52b0a4c1a1231240725e7e14e2ba572ac4250aab3f.scope\": RecentStats: unable to find data in memory cache]" Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.363058 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:39 crc kubenswrapper[4745]: E0127 12:14:39.363213 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:39.863187292 +0000 UTC m=+172.668097980 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.363498 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:39 crc kubenswrapper[4745]: E0127 12:14:39.363775 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:39.863763979 +0000 UTC m=+172.668674667 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.422522 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 12:14:39 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 27 12:14:39 crc kubenswrapper[4745]: [+]process-running ok Jan 27 12:14:39 crc kubenswrapper[4745]: healthz check failed Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.422609 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.464395 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:39 crc kubenswrapper[4745]: E0127 12:14:39.464526 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:39.964508074 +0000 UTC m=+172.769418752 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.464566 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:39 crc kubenswrapper[4745]: E0127 12:14:39.464851 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:39.964843224 +0000 UTC m=+172.769753912 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.535246 4745 generic.go:334] "Generic (PLEG): container finished" podID="7c6f4dda-1294-4903-a4c1-6685307c3b25" containerID="4c36cc87e141e730d84c81478b435a07113a1804c5bf5778526ff0a68e8c51d7" exitCode=0 Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.535344 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmx2n" event={"ID":"7c6f4dda-1294-4903-a4c1-6685307c3b25","Type":"ContainerDied","Data":"4c36cc87e141e730d84c81478b435a07113a1804c5bf5778526ff0a68e8c51d7"} Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.537578 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d5d9f268-24c4-4bd8-9de3-84d3eeab64c2","Type":"ContainerDied","Data":"353895853b9cbd116411328eb7e0fc0e3a251ec51868a222d18b8fa69faf9b6d"} Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.537955 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="353895853b9cbd116411328eb7e0fc0e3a251ec51868a222d18b8fa69faf9b6d" Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.537856 4745 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.537675 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.539826 4745 generic.go:334] "Generic (PLEG): container finished" podID="6d114857-b077-4798-b578-b9a15645d31f" containerID="b0fba6408b0f57c8898eb7f4cac3b045069835e3b9de3b1e38d096a46bacd018" exitCode=0 Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.539915 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hw272" event={"ID":"6d114857-b077-4798-b578-b9a15645d31f","Type":"ContainerDied","Data":"b0fba6408b0f57c8898eb7f4cac3b045069835e3b9de3b1e38d096a46bacd018"} Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.542763 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"accaeefe-6bc4-4802-a2c1-f356d6c57222","Type":"ContainerDied","Data":"9cc0415db17b0d9352138f2d5a83999ba839075fccdc3867b6b5df092be30047"} Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.542857 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cc0415db17b0d9352138f2d5a83999ba839075fccdc3867b6b5df092be30047" Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.542947 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.550171 4745 generic.go:334] "Generic (PLEG): container finished" podID="341a8942-834f-4f76-8269-7ecdecaaa1b0" containerID="7da42ada67eb4adfeffd31c903ae1d8cf259f32f62fb3a7ff17fcd50e65d9357" exitCode=0 Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.550231 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzw6b" event={"ID":"341a8942-834f-4f76-8269-7ecdecaaa1b0","Type":"ContainerDied","Data":"7da42ada67eb4adfeffd31c903ae1d8cf259f32f62fb3a7ff17fcd50e65d9357"} Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.551667 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c9wm" event={"ID":"3fcec544-9ef8-406d-9f01-b3ceabf2b033","Type":"ContainerStarted","Data":"d7817b625e2c1db57be561c5ebd912730f6f4a2035bb8ebfb6bbc059c7d90a83"} Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.554872 4745 generic.go:334] "Generic (PLEG): container finished" podID="d2b41701-5113-4970-8d93-157bf16b3c06" containerID="fcbe507db2aa8230c620c38f4555203eb8e317d1f966e60a3c440bc9bb509a4a" exitCode=0 Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.555610 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhnbq" event={"ID":"d2b41701-5113-4970-8d93-157bf16b3c06","Type":"ContainerDied","Data":"fcbe507db2aa8230c620c38f4555203eb8e317d1f966e60a3c440bc9bb509a4a"} Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.565894 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:39 crc kubenswrapper[4745]: E0127 12:14:39.566261 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-01-27 12:14:40.066221348 +0000 UTC m=+172.871132046 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.566340 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:39 crc kubenswrapper[4745]: E0127 12:14:39.566617 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:40.066610159 +0000 UTC m=+172.871520847 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.667403 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:39 crc kubenswrapper[4745]: E0127 12:14:39.667585 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:40.167562081 +0000 UTC m=+172.972472769 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.667855 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:39 crc kubenswrapper[4745]: E0127 12:14:39.668159 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:40.168151668 +0000 UTC m=+172.973062356 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.768787 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:39 crc kubenswrapper[4745]: E0127 12:14:39.769020 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:40.268965636 +0000 UTC m=+173.073876334 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.870125 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:39 crc kubenswrapper[4745]: E0127 12:14:39.870447 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:40.370432472 +0000 UTC m=+173.175343160 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.970748 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:39 crc kubenswrapper[4745]: E0127 12:14:39.971231 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:40.471199049 +0000 UTC m=+173.276109737 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:39 crc kubenswrapper[4745]: I0127 12:14:39.971500 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:39 crc kubenswrapper[4745]: E0127 12:14:39.971871 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:40.471863318 +0000 UTC m=+173.276774006 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:40 crc kubenswrapper[4745]: I0127 12:14:40.072884 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:40 crc kubenswrapper[4745]: E0127 12:14:40.073177 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:40.573153809 +0000 UTC m=+173.378064497 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:40 crc kubenswrapper[4745]: I0127 12:14:40.073229 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:40 crc kubenswrapper[4745]: E0127 12:14:40.073531 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:40.57351776 +0000 UTC m=+173.378428448 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:40 crc kubenswrapper[4745]: I0127 12:14:40.174660 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:40 crc kubenswrapper[4745]: E0127 12:14:40.174846 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:40.674818562 +0000 UTC m=+173.479729250 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:40 crc kubenswrapper[4745]: I0127 12:14:40.175162 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:40 crc kubenswrapper[4745]: E0127 12:14:40.175492 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:40.675484361 +0000 UTC m=+173.480395049 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:40 crc kubenswrapper[4745]: I0127 12:14:40.276862 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:40 crc kubenswrapper[4745]: E0127 12:14:40.277012 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:40.776981938 +0000 UTC m=+173.581892636 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:40 crc kubenswrapper[4745]: I0127 12:14:40.277775 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:40 crc kubenswrapper[4745]: E0127 12:14:40.278663 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:40.778631245 +0000 UTC m=+173.583541973 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:40 crc kubenswrapper[4745]: I0127 12:14:40.384502 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:40 crc kubenswrapper[4745]: E0127 12:14:40.384703 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:40.884673534 +0000 UTC m=+173.689584232 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:40 crc kubenswrapper[4745]: I0127 12:14:40.385387 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:40 crc kubenswrapper[4745]: E0127 12:14:40.385714 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:40.885703793 +0000 UTC m=+173.690614491 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:40 crc kubenswrapper[4745]: I0127 12:14:40.419713 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 12:14:40 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld
Jan 27 12:14:40 crc kubenswrapper[4745]: [+]process-running ok
Jan 27 12:14:40 crc kubenswrapper[4745]: healthz check failed
Jan 27 12:14:40 crc kubenswrapper[4745]: I0127 12:14:40.419784 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 12:14:40 crc kubenswrapper[4745]: I0127 12:14:40.487029 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:40 crc kubenswrapper[4745]: E0127 12:14:40.487334 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:40.987300433 +0000 UTC m=+173.792211121 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:40 crc kubenswrapper[4745]: I0127 12:14:40.513434 4745 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-l2frr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 27 12:14:40 crc kubenswrapper[4745]: I0127 12:14:40.513895 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" podUID="65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 27 12:14:40 crc kubenswrapper[4745]: I0127 12:14:40.562216 4745 generic.go:334] "Generic (PLEG): container finished" podID="7ff89667-3b76-4571-a07b-d43bce0a2e5b" containerID="36deef2649ebaf45ebd297dfb2f2ec9a31c6fa227ac0d1a57bafa1292007315d" exitCode=0
Jan 27 12:14:40 crc kubenswrapper[4745]: I0127 12:14:40.562285 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9tkgm" event={"ID":"7ff89667-3b76-4571-a07b-d43bce0a2e5b","Type":"ContainerDied","Data":"36deef2649ebaf45ebd297dfb2f2ec9a31c6fa227ac0d1a57bafa1292007315d"}
Jan 27 12:14:40 crc kubenswrapper[4745]: I0127 12:14:40.565312 4745 generic.go:334] "Generic (PLEG): container finished" podID="1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1" containerID="d0b913a0cbf8c0c3b713fd52b0a4c1a1231240725e7e14e2ba572ac4250aab3f" exitCode=0
Jan 27 12:14:40 crc kubenswrapper[4745]: I0127 12:14:40.565407 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg" event={"ID":"1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1","Type":"ContainerDied","Data":"d0b913a0cbf8c0c3b713fd52b0a4c1a1231240725e7e14e2ba572ac4250aab3f"}
Jan 27 12:14:40 crc kubenswrapper[4745]: I0127 12:14:40.567657 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-swntl" event={"ID":"c1811fa8-9015-4fe0-8fad-2461d64cdffd","Type":"ContainerStarted","Data":"7786cf6e7a54cec180daf921dd52916006900d82493c005d6e607800cadc0f6b"}
Jan 27 12:14:40 crc kubenswrapper[4745]: I0127 12:14:40.569039 4745 generic.go:334] "Generic (PLEG): container finished" podID="36154dea-ca68-4ca6-8e2f-83a669152ca7" containerID="700ca73d62008821a7de19a80dd7da7992a9973cda13b3baa939ec0253ac71ac" exitCode=0
Jan 27 12:14:40 crc kubenswrapper[4745]: I0127 12:14:40.569060 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzw5q" event={"ID":"36154dea-ca68-4ca6-8e2f-83a669152ca7","Type":"ContainerDied","Data":"700ca73d62008821a7de19a80dd7da7992a9973cda13b3baa939ec0253ac71ac"}
Jan 27 12:14:40 crc kubenswrapper[4745]: I0127 12:14:40.570325 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c9wm" event={"ID":"3fcec544-9ef8-406d-9f01-b3ceabf2b033","Type":"ContainerStarted","Data":"cffd45185cc0d7c532f64293bda76d21e9e6dbf69c51d75a9634c56ed140a06e"}
Jan 27 12:14:40 crc kubenswrapper[4745]: I0127 12:14:40.588923 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:40 crc kubenswrapper[4745]: E0127 12:14:40.590311 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:41.090297254 +0000 UTC m=+173.895207942 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:40 crc kubenswrapper[4745]: I0127 12:14:40.689615 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:40 crc kubenswrapper[4745]: E0127 12:14:40.690075 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:41.190041251 +0000 UTC m=+173.994951969 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:40 crc kubenswrapper[4745]: I0127 12:14:40.791301 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:40 crc kubenswrapper[4745]: E0127 12:14:40.791693 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:41.291669462 +0000 UTC m=+174.096580150 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:40 crc kubenswrapper[4745]: I0127 12:14:40.892692 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:40 crc kubenswrapper[4745]: E0127 12:14:40.892847 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:41.392805059 +0000 UTC m=+174.197715747 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:40 crc kubenswrapper[4745]: I0127 12:14:40.892900 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:40 crc kubenswrapper[4745]: E0127 12:14:40.893204 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:41.39319193 +0000 UTC m=+174.198102618 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:40 crc kubenswrapper[4745]: I0127 12:14:40.995252 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:40 crc kubenswrapper[4745]: E0127 12:14:40.995476 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:41.495440389 +0000 UTC m=+174.300351087 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:40 crc kubenswrapper[4745]: I0127 12:14:40.995729 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:40 crc kubenswrapper[4745]: E0127 12:14:40.996172 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:41.4961614 +0000 UTC m=+174.301072098 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:41 crc kubenswrapper[4745]: I0127 12:14:41.097082 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:41 crc kubenswrapper[4745]: E0127 12:14:41.097453 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:41.59740531 +0000 UTC m=+174.402316008 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:41 crc kubenswrapper[4745]: I0127 12:14:41.097518 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:41 crc kubenswrapper[4745]: E0127 12:14:41.098290 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:41.598274475 +0000 UTC m=+174.403185353 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:41 crc kubenswrapper[4745]: I0127 12:14:41.198309 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:41 crc kubenswrapper[4745]: E0127 12:14:41.198653 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:41.69864001 +0000 UTC m=+174.503550688 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:41 crc kubenswrapper[4745]: I0127 12:14:41.299685 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:41 crc kubenswrapper[4745]: E0127 12:14:41.300060 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:41.800045355 +0000 UTC m=+174.604956043 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:41 crc kubenswrapper[4745]: I0127 12:14:41.401315 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:41 crc kubenswrapper[4745]: E0127 12:14:41.401528 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:41.9014848 +0000 UTC m=+174.706395488 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:41 crc kubenswrapper[4745]: I0127 12:14:41.401699 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:41 crc kubenswrapper[4745]: E0127 12:14:41.402196 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:41.902187431 +0000 UTC m=+174.707098119 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:41 crc kubenswrapper[4745]: I0127 12:14:41.422531 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 12:14:41 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld
Jan 27 12:14:41 crc kubenswrapper[4745]: [+]process-running ok
Jan 27 12:14:41 crc kubenswrapper[4745]: healthz check failed
Jan 27 12:14:41 crc kubenswrapper[4745]: I0127 12:14:41.422624 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 12:14:41 crc kubenswrapper[4745]: I0127 12:14:41.503085 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:41 crc kubenswrapper[4745]: E0127 12:14:41.503531 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:42.003469672 +0000 UTC m=+174.808380400 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:41 crc kubenswrapper[4745]: I0127 12:14:41.579901 4745 generic.go:334] "Generic (PLEG): container finished" podID="46ec327c-832f-4a20-9b99-1aa3315c312f" containerID="31305266d1570d01864c2f7df86ea90fac3091ac25bd456054aac3c15dc7da37" exitCode=0
Jan 27 12:14:41 crc kubenswrapper[4745]: I0127 12:14:41.580028 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4" event={"ID":"46ec327c-832f-4a20-9b99-1aa3315c312f","Type":"ContainerDied","Data":"31305266d1570d01864c2f7df86ea90fac3091ac25bd456054aac3c15dc7da37"}
Jan 27 12:14:41 crc kubenswrapper[4745]: I0127 12:14:41.582530 4745 generic.go:334] "Generic (PLEG): container finished" podID="3fcec544-9ef8-406d-9f01-b3ceabf2b033" containerID="cffd45185cc0d7c532f64293bda76d21e9e6dbf69c51d75a9634c56ed140a06e" exitCode=0
Jan 27 12:14:41 crc kubenswrapper[4745]: I0127 12:14:41.582570 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c9wm" event={"ID":"3fcec544-9ef8-406d-9f01-b3ceabf2b033","Type":"ContainerDied","Data":"cffd45185cc0d7c532f64293bda76d21e9e6dbf69c51d75a9634c56ed140a06e"}
Jan 27 12:14:41 crc kubenswrapper[4745]: I0127 12:14:41.605233 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:41 crc kubenswrapper[4745]: E0127 12:14:41.605640 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:42.105621618 +0000 UTC m=+174.910532306 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:41 crc kubenswrapper[4745]: I0127 12:14:41.706138 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:41 crc kubenswrapper[4745]: E0127 12:14:41.706371 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:42.206350312 +0000 UTC m=+175.011261010 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:41 crc kubenswrapper[4745]: I0127 12:14:41.706426 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:41 crc kubenswrapper[4745]: E0127 12:14:41.707154 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:42.207119005 +0000 UTC m=+175.012029713 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:41 crc kubenswrapper[4745]: I0127 12:14:41.710967 4745 patch_prober.go:28] interesting pod/apiserver-76f77b778f-tf24j container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 27 12:14:41 crc kubenswrapper[4745]: [+]log ok
Jan 27 12:14:41 crc kubenswrapper[4745]: [+]etcd ok
Jan 27 12:14:41 crc kubenswrapper[4745]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 27 12:14:41 crc kubenswrapper[4745]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 27 12:14:41 crc kubenswrapper[4745]: [+]poststarthook/max-in-flight-filter ok
Jan 27 12:14:41 crc kubenswrapper[4745]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 27 12:14:41 crc kubenswrapper[4745]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 27 12:14:41 crc kubenswrapper[4745]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 27 12:14:41 crc kubenswrapper[4745]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Jan 27 12:14:41 crc kubenswrapper[4745]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 27 12:14:41 crc kubenswrapper[4745]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 27 12:14:41 crc kubenswrapper[4745]: [+]poststarthook/openshift.io-startinformers ok
Jan 27 12:14:41 crc kubenswrapper[4745]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 27 12:14:41 crc kubenswrapper[4745]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 27 12:14:41 crc kubenswrapper[4745]: livez check failed
Jan 27 12:14:41 crc kubenswrapper[4745]: I0127 12:14:41.711031 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-tf24j" podUID="cef45995-4242-499f-adeb-cc12aa630b5c" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 12:14:41 crc kubenswrapper[4745]: I0127 12:14:41.807285 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:41 crc kubenswrapper[4745]: E0127 12:14:41.807446 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:42.307416047 +0000 UTC m=+175.112326735 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:41 crc kubenswrapper[4745]: I0127 12:14:41.807546 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:41 crc kubenswrapper[4745]: E0127 12:14:41.807889 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:42.307878991 +0000 UTC m=+175.112789679 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:41 crc kubenswrapper[4745]: I0127 12:14:41.909151 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:41 crc kubenswrapper[4745]: E0127 12:14:41.909370 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:42.409328197 +0000 UTC m=+175.214238885 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:41 crc kubenswrapper[4745]: I0127 12:14:41.909464 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:41 crc kubenswrapper[4745]: E0127 12:14:41.909749 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:42.409742779 +0000 UTC m=+175.214653467 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:42 crc kubenswrapper[4745]: I0127 12:14:42.011155 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:42 crc kubenswrapper[4745]: E0127 12:14:42.011348 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:42.511319858 +0000 UTC m=+175.316230546 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:42 crc kubenswrapper[4745]: I0127 12:14:42.011578 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:42 crc kubenswrapper[4745]: E0127 12:14:42.011937 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:42.511926466 +0000 UTC m=+175.316837154 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:42 crc kubenswrapper[4745]: I0127 12:14:42.112456 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:42 crc kubenswrapper[4745]: E0127 12:14:42.112674 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:42.612638861 +0000 UTC m=+175.417549559 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:42 crc kubenswrapper[4745]: I0127 12:14:42.112763 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:42 crc kubenswrapper[4745]: E0127 12:14:42.113161 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:42.613151305 +0000 UTC m=+175.418062003 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:42 crc kubenswrapper[4745]: I0127 12:14:42.214051 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:42 crc kubenswrapper[4745]: E0127 12:14:42.214341 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:42.714306213 +0000 UTC m=+175.519216911 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:42 crc kubenswrapper[4745]: I0127 12:14:42.214517 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:42 crc kubenswrapper[4745]: E0127 12:14:42.215005 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:42.714994453 +0000 UTC m=+175.519905141 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:42 crc kubenswrapper[4745]: I0127 12:14:42.316146 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:42 crc kubenswrapper[4745]: E0127 12:14:42.316449 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:42.816406278 +0000 UTC m=+175.621316956 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:42 crc kubenswrapper[4745]: I0127 12:14:42.316763 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:42 crc kubenswrapper[4745]: E0127 12:14:42.317272 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:42.817263422 +0000 UTC m=+175.622174110 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:42 crc kubenswrapper[4745]: I0127 12:14:42.418484 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:42 crc kubenswrapper[4745]: E0127 12:14:42.418938 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:42.918883473 +0000 UTC m=+175.723794181 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:42 crc kubenswrapper[4745]: I0127 12:14:42.419074 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:42 crc kubenswrapper[4745]: E0127 12:14:42.419461 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:42.919442569 +0000 UTC m=+175.724353257 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:42 crc kubenswrapper[4745]: I0127 12:14:42.420074 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 12:14:42 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld
Jan 27 12:14:42 crc kubenswrapper[4745]: [+]process-running ok
Jan 27 12:14:42 crc kubenswrapper[4745]: healthz check failed
Jan 27 12:14:42 crc kubenswrapper[4745]: I0127 12:14:42.420120 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 12:14:42 crc kubenswrapper[4745]: I0127 12:14:42.520577 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:42 crc kubenswrapper[4745]: E0127 12:14:42.520975 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:43.020932107 +0000 UTC m=+175.825842795 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:42 crc kubenswrapper[4745]: I0127 12:14:42.622895 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:42 crc kubenswrapper[4745]: E0127 12:14:42.623615 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:43.123595117 +0000 UTC m=+175.928505805 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:42 crc kubenswrapper[4745]: I0127 12:14:42.724877 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:42 crc kubenswrapper[4745]: E0127 12:14:42.725309 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:43.225292521 +0000 UTC m=+176.030203209 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:42 crc kubenswrapper[4745]: I0127 12:14:42.827227 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:42 crc kubenswrapper[4745]: E0127 12:14:42.827770 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:43.327746205 +0000 UTC m=+176.132656893 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:42 crc kubenswrapper[4745]: I0127 12:14:42.928557 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:42 crc kubenswrapper[4745]: E0127 12:14:42.928725 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:43.428706237 +0000 UTC m=+176.233616925 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:42 crc kubenswrapper[4745]: I0127 12:14:42.929032 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:42 crc kubenswrapper[4745]: E0127 12:14:42.929450 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:43.429433788 +0000 UTC m=+176.234344476 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.030944 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:43 crc kubenswrapper[4745]: E0127 12:14:43.031250 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:43.531211094 +0000 UTC m=+176.336121822 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.031356 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:43 crc kubenswrapper[4745]: E0127 12:14:43.031962 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:43.531923054 +0000 UTC m=+176.336833792 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.132274 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:43 crc kubenswrapper[4745]: E0127 12:14:43.132792 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:43.632747802 +0000 UTC m=+176.437658520 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.233852 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:43 crc kubenswrapper[4745]: E0127 12:14:43.234432 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:43.734418745 +0000 UTC m=+176.539329433 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.335797 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:43 crc kubenswrapper[4745]: E0127 12:14:43.336083 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:43.836053326 +0000 UTC m=+176.640964014 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.336183 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:43 crc kubenswrapper[4745]: E0127 12:14:43.336656 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:43.836647043 +0000 UTC m=+176.641557731 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.421404 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 12:14:43 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 27 12:14:43 crc kubenswrapper[4745]: [+]process-running ok Jan 27 12:14:43 crc kubenswrapper[4745]: healthz check failed Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.421932 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.437661 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:43 crc kubenswrapper[4745]: E0127 12:14:43.438119 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:43.938100049 +0000 UTC m=+176.743010747 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.512336 4745 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-l2frr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.512391 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" podUID="65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.539842 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:43 crc kubenswrapper[4745]: E0127 12:14:43.540226 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:44.040211924 +0000 UTC m=+176.845122612 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.598970 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-l2frr_65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2/openshift-config-operator/0.log" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.601318 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" event={"ID":"65e3ba78-9bdc-41bc-ad3f-ddccbf79c6c2","Type":"ContainerStarted","Data":"94963dfb12d2f8393eed072d365564696447b1c5b226005cf859b884e9ca0cc8"} Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.641009 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:43 crc kubenswrapper[4745]: E0127 12:14:43.641166 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:44.141146946 +0000 UTC m=+176.946057634 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.641282 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:43 crc kubenswrapper[4745]: E0127 12:14:43.641642 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:44.1416339 +0000 UTC m=+176.946544588 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.647996 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-swntl" podStartSLOduration=153.647977302 podStartE2EDuration="2m33.647977302s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:43.645792149 +0000 UTC m=+176.450702837" watchObservedRunningTime="2026-01-27 12:14:43.647977302 +0000 UTC m=+176.452888000" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.742304 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:43 crc kubenswrapper[4745]: E0127 12:14:43.742773 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:44.242759646 +0000 UTC m=+177.047670334 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.765428 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.794141 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8"] Jan 27 12:14:43 crc kubenswrapper[4745]: E0127 12:14:43.794344 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="accaeefe-6bc4-4802-a2c1-f356d6c57222" containerName="pruner" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.794357 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="accaeefe-6bc4-4802-a2c1-f356d6c57222" containerName="pruner" Jan 27 12:14:43 crc kubenswrapper[4745]: E0127 12:14:43.794368 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46ec327c-832f-4a20-9b99-1aa3315c312f" containerName="route-controller-manager" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.794375 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ec327c-832f-4a20-9b99-1aa3315c312f" containerName="route-controller-manager" Jan 27 12:14:43 crc kubenswrapper[4745]: E0127 12:14:43.794385 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5d9f268-24c4-4bd8-9de3-84d3eeab64c2" containerName="pruner" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.794391 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5d9f268-24c4-4bd8-9de3-84d3eeab64c2" containerName="pruner" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.794482 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="accaeefe-6bc4-4802-a2c1-f356d6c57222" containerName="pruner" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.794493 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="46ec327c-832f-4a20-9b99-1aa3315c312f" containerName="route-controller-manager" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.794505 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5d9f268-24c4-4bd8-9de3-84d3eeab64c2" containerName="pruner" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.794973 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.807580 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8"] Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.846192 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46ec327c-832f-4a20-9b99-1aa3315c312f-serving-cert\") pod \"46ec327c-832f-4a20-9b99-1aa3315c312f\" (UID: \"46ec327c-832f-4a20-9b99-1aa3315c312f\") " Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.846315 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46ec327c-832f-4a20-9b99-1aa3315c312f-config\") pod \"46ec327c-832f-4a20-9b99-1aa3315c312f\" (UID: \"46ec327c-832f-4a20-9b99-1aa3315c312f\") " Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.846358 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6swl\" (UniqueName: \"kubernetes.io/projected/46ec327c-832f-4a20-9b99-1aa3315c312f-kube-api-access-c6swl\") pod \"46ec327c-832f-4a20-9b99-1aa3315c312f\" (UID: \"46ec327c-832f-4a20-9b99-1aa3315c312f\") " Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.846382 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46ec327c-832f-4a20-9b99-1aa3315c312f-client-ca\") pod \"46ec327c-832f-4a20-9b99-1aa3315c312f\" (UID: \"46ec327c-832f-4a20-9b99-1aa3315c312f\") " Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.846737 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6067d6a1-495e-4980-8189-ccdc43df1a37-serving-cert\") pod \"route-controller-manager-548d5b954d-pxxs8\" (UID: \"6067d6a1-495e-4980-8189-ccdc43df1a37\") " pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.846787 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.846839 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wsqh\" (UniqueName: \"kubernetes.io/projected/6067d6a1-495e-4980-8189-ccdc43df1a37-kube-api-access-5wsqh\") pod \"route-controller-manager-548d5b954d-pxxs8\" (UID: \"6067d6a1-495e-4980-8189-ccdc43df1a37\") " pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.846870 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6067d6a1-495e-4980-8189-ccdc43df1a37-client-ca\") pod \"route-controller-manager-548d5b954d-pxxs8\" (UID: \"6067d6a1-495e-4980-8189-ccdc43df1a37\") " pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" Jan 27 12:14:43 crc 
kubenswrapper[4745]: I0127 12:14:43.846914 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6067d6a1-495e-4980-8189-ccdc43df1a37-config\") pod \"route-controller-manager-548d5b954d-pxxs8\" (UID: \"6067d6a1-495e-4980-8189-ccdc43df1a37\") " pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.847716 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46ec327c-832f-4a20-9b99-1aa3315c312f-client-ca" (OuterVolumeSpecName: "client-ca") pod "46ec327c-832f-4a20-9b99-1aa3315c312f" (UID: "46ec327c-832f-4a20-9b99-1aa3315c312f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:14:43 crc kubenswrapper[4745]: E0127 12:14:43.848019 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:44.348006281 +0000 UTC m=+177.152916969 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.848669 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46ec327c-832f-4a20-9b99-1aa3315c312f-config" (OuterVolumeSpecName: "config") pod "46ec327c-832f-4a20-9b99-1aa3315c312f" (UID: "46ec327c-832f-4a20-9b99-1aa3315c312f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.854708 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46ec327c-832f-4a20-9b99-1aa3315c312f-kube-api-access-c6swl" (OuterVolumeSpecName: "kube-api-access-c6swl") pod "46ec327c-832f-4a20-9b99-1aa3315c312f" (UID: "46ec327c-832f-4a20-9b99-1aa3315c312f"). InnerVolumeSpecName "kube-api-access-c6swl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.860892 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46ec327c-832f-4a20-9b99-1aa3315c312f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "46ec327c-832f-4a20-9b99-1aa3315c312f" (UID: "46ec327c-832f-4a20-9b99-1aa3315c312f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.888275 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.947908 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-config\") pod \"1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1\" (UID: \"1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1\") " Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.948007 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-serving-cert\") pod \"1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1\" (UID: \"1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1\") " Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.948071 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-proxy-ca-bundles\") pod \"1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1\" (UID: \"1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1\") " Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.948182 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.948215 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-client-ca\") pod \"1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1\" (UID: \"1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1\") " Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.948233 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldshw\" (UniqueName: \"kubernetes.io/projected/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-kube-api-access-ldshw\") pod \"1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1\" (UID: \"1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1\") " Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.948351 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6067d6a1-495e-4980-8189-ccdc43df1a37-serving-cert\") pod \"route-controller-manager-548d5b954d-pxxs8\" (UID: \"6067d6a1-495e-4980-8189-ccdc43df1a37\") " pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.948389 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wsqh\" (UniqueName: \"kubernetes.io/projected/6067d6a1-495e-4980-8189-ccdc43df1a37-kube-api-access-5wsqh\") pod \"route-controller-manager-548d5b954d-pxxs8\" (UID: \"6067d6a1-495e-4980-8189-ccdc43df1a37\") " pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.948408 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6067d6a1-495e-4980-8189-ccdc43df1a37-client-ca\") pod \"route-controller-manager-548d5b954d-pxxs8\" (UID: \"6067d6a1-495e-4980-8189-ccdc43df1a37\") " 
pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.948440 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6067d6a1-495e-4980-8189-ccdc43df1a37-config\") pod \"route-controller-manager-548d5b954d-pxxs8\" (UID: \"6067d6a1-495e-4980-8189-ccdc43df1a37\") " pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.948502 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46ec327c-832f-4a20-9b99-1aa3315c312f-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.948516 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6swl\" (UniqueName: \"kubernetes.io/projected/46ec327c-832f-4a20-9b99-1aa3315c312f-kube-api-access-c6swl\") on node \"crc\" DevicePath \"\"" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.948524 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46ec327c-832f-4a20-9b99-1aa3315c312f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.948533 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46ec327c-832f-4a20-9b99-1aa3315c312f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:14:43 crc kubenswrapper[4745]: E0127 12:14:43.948986 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:44.448953073 +0000 UTC m=+177.253863771 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.949333 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-client-ca" (OuterVolumeSpecName: "client-ca") pod "1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1" (UID: "1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.949708 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-config" (OuterVolumeSpecName: "config") pod "1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1" (UID: "1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.950015 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6067d6a1-495e-4980-8189-ccdc43df1a37-client-ca\") pod \"route-controller-manager-548d5b954d-pxxs8\" (UID: \"6067d6a1-495e-4980-8189-ccdc43df1a37\") " pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.950902 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1" (UID: "1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.952695 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-kube-api-access-ldshw" (OuterVolumeSpecName: "kube-api-access-ldshw") pod "1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1" (UID: "1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1"). InnerVolumeSpecName "kube-api-access-ldshw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.952803 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6067d6a1-495e-4980-8189-ccdc43df1a37-config\") pod \"route-controller-manager-548d5b954d-pxxs8\" (UID: \"6067d6a1-495e-4980-8189-ccdc43df1a37\") " pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.954192 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1" (UID: "1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.955309 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6067d6a1-495e-4980-8189-ccdc43df1a37-serving-cert\") pod \"route-controller-manager-548d5b954d-pxxs8\" (UID: \"6067d6a1-495e-4980-8189-ccdc43df1a37\") " pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" Jan 27 12:14:43 crc kubenswrapper[4745]: I0127 12:14:43.965585 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wsqh\" (UniqueName: \"kubernetes.io/projected/6067d6a1-495e-4980-8189-ccdc43df1a37-kube-api-access-5wsqh\") pod \"route-controller-manager-548d5b954d-pxxs8\" (UID: \"6067d6a1-495e-4980-8189-ccdc43df1a37\") " pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.049651 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:44 crc kubenswrapper[4745]: E0127 12:14:44.050079 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:44.550063929 +0000 UTC m=+177.354974617 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.050568 4745 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.050598 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.050620 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldshw\" (UniqueName: \"kubernetes.io/projected/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-kube-api-access-ldshw\") on node \"crc\" DevicePath \"\"" Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.050633 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.050644 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:14:44 crc 
kubenswrapper[4745]: I0127 12:14:44.110006 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.151985 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:44 crc kubenswrapper[4745]: E0127 12:14:44.152174 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:44.652146464 +0000 UTC m=+177.457057152 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.152335 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:44 crc kubenswrapper[4745]: E0127 12:14:44.152669 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:44.652652808 +0000 UTC m=+177.457563496 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.254367 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:44 crc kubenswrapper[4745]: E0127 12:14:44.254926 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:44.754344961 +0000 UTC m=+177.559255679 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.255076 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:44 crc kubenswrapper[4745]: E0127 12:14:44.256481 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:44.756454392 +0000 UTC m=+177.561365120 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.299623 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8"] Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.355643 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:44 crc kubenswrapper[4745]: E0127 12:14:44.356082 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:44.856064895 +0000 UTC m=+177.660975583 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.421004 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 12:14:44 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 27 12:14:44 crc kubenswrapper[4745]: [+]process-running ok Jan 27 12:14:44 crc kubenswrapper[4745]: healthz check failed Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.421065 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.458968 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:44 crc kubenswrapper[4745]: E0127 12:14:44.459455 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:44.959437996 +0000 UTC m=+177.764348694 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.559834 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:44 crc kubenswrapper[4745]: E0127 12:14:44.559986 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:45.059971266 +0000 UTC m=+177.864881954 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.560020 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:44 crc kubenswrapper[4745]: E0127 12:14:44.560330 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:45.060321596 +0000 UTC m=+177.865232284 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.608852 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" event={"ID":"6067d6a1-495e-4980-8189-ccdc43df1a37","Type":"ContainerStarted","Data":"e4060e9e94cd30f5b52cbd5d39cb28c4f22eb026846b8da47553e8a84e5098f1"} Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.610730 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4" event={"ID":"46ec327c-832f-4a20-9b99-1aa3315c312f","Type":"ContainerDied","Data":"af2a7acf865056a177cde9e5acabb19333c29ce1e1aaba36f630fd42c880bb45"} Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.610766 4745 scope.go:117] "RemoveContainer" containerID="31305266d1570d01864c2f7df86ea90fac3091ac25bd456054aac3c15dc7da37" Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.610897 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4" Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.613711 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg" Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.613700 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ndbtg" event={"ID":"1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1","Type":"ContainerDied","Data":"9caa386630c9d78a180790c3b772b585eff7964cc4abf5f978cb505f8b857542"} Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.614077 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.624844 4745 scope.go:117] "RemoveContainer" containerID="d0b913a0cbf8c0c3b713fd52b0a4c1a1231240725e7e14e2ba572ac4250aab3f" Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.654438 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4"] Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.657484 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8nsr4"] Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.661421 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:44 crc kubenswrapper[4745]: E0127 12:14:44.661580 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:45.161547996 +0000 UTC m=+177.966458684 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.661707 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:44 crc kubenswrapper[4745]: E0127 12:14:44.662027 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:45.162018809 +0000 UTC m=+177.966929497 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.664800 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ndbtg"] Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.668081 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ndbtg"] Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.762668 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:44 crc kubenswrapper[4745]: E0127 12:14:44.762842 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:45.262791286 +0000 UTC m=+178.067701974 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.763396 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:44 crc kubenswrapper[4745]: E0127 12:14:44.763847 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:45.263826335 +0000 UTC m=+178.068737043 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.864063 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:44 crc kubenswrapper[4745]: E0127 12:14:44.864222 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:45.364195741 +0000 UTC m=+178.169106439 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.864396 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:44 crc kubenswrapper[4745]: E0127 12:14:44.864759 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:45.364748576 +0000 UTC m=+178.169659284 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.966095 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:44 crc kubenswrapper[4745]: E0127 12:14:44.966256 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:45.466228823 +0000 UTC m=+178.271139511 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:44 crc kubenswrapper[4745]: I0127 12:14:44.966344 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:44 crc kubenswrapper[4745]: E0127 12:14:44.966641 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:45.466629575 +0000 UTC m=+178.271540263 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:45 crc kubenswrapper[4745]: I0127 12:14:45.083469 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:45 crc kubenswrapper[4745]: E0127 12:14:45.084423 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:45.584383789 +0000 UTC m=+178.389294497 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:45 crc kubenswrapper[4745]: I0127 12:14:45.185887 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:45 crc kubenswrapper[4745]: E0127 12:14:45.186555 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:45.686536286 +0000 UTC m=+178.491446974 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:45 crc kubenswrapper[4745]: I0127 12:14:45.287035 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:45 crc kubenswrapper[4745]: E0127 12:14:45.287325 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:45.787220009 +0000 UTC m=+178.592186328 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:45 crc kubenswrapper[4745]: I0127 12:14:45.287432 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:45 crc kubenswrapper[4745]: E0127 12:14:45.287978 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:45.78795864 +0000 UTC m=+178.592869348 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:45 crc kubenswrapper[4745]: I0127 12:14:45.389009 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:45 crc kubenswrapper[4745]: E0127 12:14:45.389298 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:45.889279162 +0000 UTC m=+178.694189850 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:45 crc kubenswrapper[4745]: I0127 12:14:45.420740 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 12:14:45 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 27 12:14:45 crc kubenswrapper[4745]: [+]process-running ok Jan 27 12:14:45 crc kubenswrapper[4745]: healthz check failed Jan 27 12:14:45 crc kubenswrapper[4745]: I0127 12:14:45.420842 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 12:14:45 crc kubenswrapper[4745]: I0127 12:14:45.490203 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:45 crc kubenswrapper[4745]: E0127 12:14:45.490526 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:45.990511582 +0000 UTC m=+178.795422270 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:45 crc kubenswrapper[4745]: I0127 12:14:45.591406 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:45 crc kubenswrapper[4745]: E0127 12:14:45.591705 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:46.09168845 +0000 UTC m=+178.896599138 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:45 crc kubenswrapper[4745]: I0127 12:14:45.619684 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" event={"ID":"6067d6a1-495e-4980-8189-ccdc43df1a37","Type":"ContainerStarted","Data":"1f583b0a83e7790c196d2e73af9102881bb3f3d6a420889e1bc360384e322004"} Jan 27 12:14:45 crc kubenswrapper[4745]: I0127 12:14:45.692761 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:45 crc kubenswrapper[4745]: E0127 12:14:45.693405 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:46.193378383 +0000 UTC m=+178.998289071 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:45 crc kubenswrapper[4745]: I0127 12:14:45.794009 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:45 crc kubenswrapper[4745]: E0127 12:14:45.794359 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:46.294339745 +0000 UTC m=+179.099250433 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:45 crc kubenswrapper[4745]: I0127 12:14:45.897432 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:45 crc kubenswrapper[4745]: E0127 12:14:45.897835 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:46.39781817 +0000 UTC m=+179.202728858 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:45 crc kubenswrapper[4745]: I0127 12:14:45.999450 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:45 crc kubenswrapper[4745]: E0127 12:14:45.999664 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:46.499628556 +0000 UTC m=+179.304539244 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:45 crc kubenswrapper[4745]: I0127 12:14:45.999771 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:46 crc kubenswrapper[4745]: E0127 12:14:46.000224 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:46.500211573 +0000 UTC m=+179.305122271 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.080921 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1" path="/var/lib/kubelet/pods/1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1/volumes" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.081655 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46ec327c-832f-4a20-9b99-1aa3315c312f" path="/var/lib/kubelet/pods/46ec327c-832f-4a20-9b99-1aa3315c312f/volumes" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.101233 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:46 crc kubenswrapper[4745]: E0127 12:14:46.101432 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:46.601401581 +0000 UTC m=+179.406312269 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.101586 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:46 crc kubenswrapper[4745]: E0127 12:14:46.101999 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:46.601989418 +0000 UTC m=+179.406900106 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.202381 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:46 crc kubenswrapper[4745]: E0127 12:14:46.202711 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:46.702637131 +0000 UTC m=+179.507547819 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.304957 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:46 crc kubenswrapper[4745]: E0127 12:14:46.305333 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:46.805315813 +0000 UTC m=+179.610226501 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.406572 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:46 crc kubenswrapper[4745]: E0127 12:14:46.406979 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:46.906948684 +0000 UTC m=+179.711859382 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.419651 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 12:14:46 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 27 12:14:46 crc kubenswrapper[4745]: [+]process-running ok Jan 27 12:14:46 crc kubenswrapper[4745]: healthz check failed Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.419714 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.508725 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:46 crc kubenswrapper[4745]: E0127 12:14:46.509053 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:47.009040728 +0000 UTC m=+179.813951416 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.569212 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbsbc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.569267 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.569531 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-hbsbc" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.570066 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbsbc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.570092 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.570171 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"92f38c5b389b104b9805b291f8142622c2958d40dec44e8dffe0957115aae9c3"} pod="openshift-console/downloads-7954f5f757-hbsbc" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.570212 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" containerID="cri-o://92f38c5b389b104b9805b291f8142622c2958d40dec44e8dffe0957115aae9c3" gracePeriod=2 Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.570407 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbsbc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.570476 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 
12:14:46.578760 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l2frr" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.610549 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:46 crc kubenswrapper[4745]: E0127 12:14:46.610785 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:47.110752403 +0000 UTC m=+179.915663091 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.610950 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:46 crc kubenswrapper[4745]: E0127 12:14:46.611327 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:47.111319109 +0000 UTC m=+179.916229797 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.626574 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.639824 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.648219 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" podStartSLOduration=7.648185568 podStartE2EDuration="7.648185568s" podCreationTimestamp="2026-01-27 12:14:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:46.64757353 +0000 UTC m=+179.452484218" watchObservedRunningTime="2026-01-27 12:14:46.648185568 +0000 UTC m=+179.453096256" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.663983 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5bdff47b56-cnbn4"] Jan 27 12:14:46 crc kubenswrapper[4745]: E0127 12:14:46.664200 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1" containerName="controller-manager" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.664213 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1" containerName="controller-manager" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.664311 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ffaf6a7-4a55-48aa-a1aa-1ac8149dbbc1" containerName="controller-manager" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.664639 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.669783 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.669957 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.669975 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.670508 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.670531 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.671542 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.679373 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.680513 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bdff47b56-cnbn4"] Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.711980 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.712479 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:46 crc kubenswrapper[4745]: E0127 12:14:46.712576 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:47.212549068 +0000 UTC m=+180.017459756 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.712727 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a85c1cb8-c1a7-4752-b530-c4d21eb39817-config\") pod \"controller-manager-5bdff47b56-cnbn4\" (UID: \"a85c1cb8-c1a7-4752-b530-c4d21eb39817\") " pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.712761 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a85c1cb8-c1a7-4752-b530-c4d21eb39817-proxy-ca-bundles\") pod \"controller-manager-5bdff47b56-cnbn4\" (UID: \"a85c1cb8-c1a7-4752-b530-c4d21eb39817\") " pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.713166 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p4kx\" (UniqueName: \"kubernetes.io/projected/a85c1cb8-c1a7-4752-b530-c4d21eb39817-kube-api-access-6p4kx\") pod \"controller-manager-5bdff47b56-cnbn4\" (UID: \"a85c1cb8-c1a7-4752-b530-c4d21eb39817\") " pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.713276 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.713331 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a85c1cb8-c1a7-4752-b530-c4d21eb39817-serving-cert\") pod \"controller-manager-5bdff47b56-cnbn4\" (UID: \"a85c1cb8-c1a7-4752-b530-c4d21eb39817\") " pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.713350 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a85c1cb8-c1a7-4752-b530-c4d21eb39817-client-ca\") pod \"controller-manager-5bdff47b56-cnbn4\" (UID: \"a85c1cb8-c1a7-4752-b530-c4d21eb39817\") " pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4" Jan 27 12:14:46 crc kubenswrapper[4745]: E0127 12:14:46.713620 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:47.213610888 +0000 UTC m=+180.018521576 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.716254 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-tf24j" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.814297 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.814581 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a85c1cb8-c1a7-4752-b530-c4d21eb39817-config\") pod \"controller-manager-5bdff47b56-cnbn4\" (UID: \"a85c1cb8-c1a7-4752-b530-c4d21eb39817\") " pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.814634 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a85c1cb8-c1a7-4752-b530-c4d21eb39817-proxy-ca-bundles\") pod \"controller-manager-5bdff47b56-cnbn4\" (UID: \"a85c1cb8-c1a7-4752-b530-c4d21eb39817\") " pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.814740 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p4kx\" (UniqueName: \"kubernetes.io/projected/a85c1cb8-c1a7-4752-b530-c4d21eb39817-kube-api-access-6p4kx\") pod \"controller-manager-5bdff47b56-cnbn4\" (UID: \"a85c1cb8-c1a7-4752-b530-c4d21eb39817\") " pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.814879 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a85c1cb8-c1a7-4752-b530-c4d21eb39817-serving-cert\") pod \"controller-manager-5bdff47b56-cnbn4\" (UID: \"a85c1cb8-c1a7-4752-b530-c4d21eb39817\") " pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.814937 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a85c1cb8-c1a7-4752-b530-c4d21eb39817-client-ca\") pod \"controller-manager-5bdff47b56-cnbn4\" (UID: \"a85c1cb8-c1a7-4752-b530-c4d21eb39817\") " pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4" Jan 27 12:14:46 crc kubenswrapper[4745]: E0127 12:14:46.815968 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:47.315930539 +0000 UTC m=+180.120841227 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.820367 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a85c1cb8-c1a7-4752-b530-c4d21eb39817-proxy-ca-bundles\") pod \"controller-manager-5bdff47b56-cnbn4\" (UID: \"a85c1cb8-c1a7-4752-b530-c4d21eb39817\") " pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.825213 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a85c1cb8-c1a7-4752-b530-c4d21eb39817-config\") pod \"controller-manager-5bdff47b56-cnbn4\" (UID: \"a85c1cb8-c1a7-4752-b530-c4d21eb39817\") " pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.842580 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a85c1cb8-c1a7-4752-b530-c4d21eb39817-serving-cert\") pod \"controller-manager-5bdff47b56-cnbn4\" (UID: \"a85c1cb8-c1a7-4752-b530-c4d21eb39817\") " pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.847656 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p4kx\" (UniqueName: \"kubernetes.io/projected/a85c1cb8-c1a7-4752-b530-c4d21eb39817-kube-api-access-6p4kx\") pod \"controller-manager-5bdff47b56-cnbn4\" (UID: \"a85c1cb8-c1a7-4752-b530-c4d21eb39817\") " pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.896860 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a85c1cb8-c1a7-4752-b530-c4d21eb39817-client-ca\") pod \"controller-manager-5bdff47b56-cnbn4\" (UID: \"a85c1cb8-c1a7-4752-b530-c4d21eb39817\") " pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4" Jan 27 12:14:46 crc kubenswrapper[4745]: I0127 12:14:46.915918 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:46 crc kubenswrapper[4745]: E0127 12:14:46.916226 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:47.416215262 +0000 UTC m=+180.221125950 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:47 crc kubenswrapper[4745]: I0127 12:14:47.017434 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:47 crc kubenswrapper[4745]: E0127 12:14:47.017509 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:47.517485093 +0000 UTC m=+180.322395781 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:47 crc kubenswrapper[4745]: I0127 12:14:47.017839 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:47 crc kubenswrapper[4745]: E0127 12:14:47.018426 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:47.5184141 +0000 UTC m=+180.323324778 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 12:14:47 crc kubenswrapper[4745]: I0127 12:14:47.086781 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4"
Jan 27 12:14:47 crc kubenswrapper[4745]: I0127 12:14:47.118799 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:47 crc kubenswrapper[4745]: E0127 12:14:47.119428 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:47.619400402 +0000 UTC m=+180.424311100 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:47 crc kubenswrapper[4745]: I0127 12:14:47.220949 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:47 crc kubenswrapper[4745]: E0127 12:14:47.221275 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:47.72126261 +0000 UTC m=+180.526173298 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:47 crc kubenswrapper[4745]: I0127 12:14:47.316407 4745 patch_prober.go:28] interesting pod/console-f9d7485db-zqrwf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Jan 27 12:14:47 crc kubenswrapper[4745]: I0127 12:14:47.316722 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zqrwf" podUID="94eb6425-bdf2-43d1-926e-c94700a985be" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused"
Jan 27 12:14:47 crc kubenswrapper[4745]: I0127 12:14:47.326383 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:47 crc kubenswrapper[4745]: E0127 12:14:47.327095 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:47.827067671 +0000 UTC m=+180.631978359 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:47 crc kubenswrapper[4745]: I0127 12:14:47.386056 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bdff47b56-cnbn4"]
Jan 27 12:14:47 crc kubenswrapper[4745]: W0127 12:14:47.401012 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda85c1cb8_c1a7_4752_b530_c4d21eb39817.slice/crio-5028a63d3661ba3435a948fe75f527977230729faa80e166e332052c86246301 WatchSource:0}: Error finding container 5028a63d3661ba3435a948fe75f527977230729faa80e166e332052c86246301: Status 404 returned error can't find the container with id 5028a63d3661ba3435a948fe75f527977230729faa80e166e332052c86246301
Jan 27 12:14:47 crc kubenswrapper[4745]: I0127 12:14:47.419827 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 12:14:47 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld
Jan 27 12:14:47 crc kubenswrapper[4745]: [+]process-running ok
Jan 27 12:14:47 crc kubenswrapper[4745]: healthz check failed
Jan 27 12:14:47 crc kubenswrapper[4745]: I0127 12:14:47.419878 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 12:14:47 crc kubenswrapper[4745]: I0127 12:14:47.427964 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:47 crc kubenswrapper[4745]: E0127 12:14:47.428380 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:47.928351073 +0000 UTC m=+180.733261761 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
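The pair of failures above repeats roughly every 100ms until 12:14:48: the volume reconciler keeps starting UnmountVolume for the deleted pod 8f668bae-612b-4b75-9490-919e737c6a3b and MountVolume for the replacement image-registry pod, and nestedpendingoperations refuses each retry until the 500ms durationBeforeRetry has elapsed, because the kubevirt.io.hostpath-provisioner CSI driver has not yet registered with the kubelet. A minimal Go sketch of that fixed-delay retry pattern follows; it is illustrative only, not the kubelet's actual nestedpendingoperations code, and the attempt count is invented for the demo.

package main

import (
	"errors"
	"fmt"
	"time"
)

// Illustrative stand-in for the pattern in the log: a failed volume
// operation is not retried until durationBeforeRetry (500ms here) has
// elapsed, and it keeps failing until the CSI driver finally registers.
func main() {
	attempt := 0
	op := func() error {
		attempt++
		if attempt < 4 { // pretend the driver registers before the 4th try
			return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
		}
		return nil
	}
	for {
		if err := op(); err != nil {
			fmt.Printf("attempt %d failed: %v\n", attempt, err)
			time.Sleep(500 * time.Millisecond) // durationBeforeRetry
			continue
		}
		fmt.Println("operation succeeded")
		return
	}
}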
Jan 27 12:14:47 crc kubenswrapper[4745]: I0127 12:14:47.529119 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:47 crc kubenswrapper[4745]: E0127 12:14:47.529454 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:48.029436968 +0000 UTC m=+180.834347656 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:47 crc kubenswrapper[4745]: I0127 12:14:47.630674 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:47 crc kubenswrapper[4745]: E0127 12:14:47.631380 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:48.131360868 +0000 UTC m=+180.936271556 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:47 crc kubenswrapper[4745]: I0127 12:14:47.634071 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4" event={"ID":"a85c1cb8-c1a7-4752-b530-c4d21eb39817","Type":"ContainerStarted","Data":"5028a63d3661ba3435a948fe75f527977230729faa80e166e332052c86246301"}
Jan 27 12:14:47 crc kubenswrapper[4745]: I0127 12:14:47.637248 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v7vzv" event={"ID":"0b5cf703-06c8-4a98-b58b-71543d23affe","Type":"ContainerStarted","Data":"fcf8af660c75dc6089a12fe7efcb9b92d2420d1a5fe8b3e1e9a86615cc706e6e"}
Jan 27 12:14:47 crc kubenswrapper[4745]: I0127 12:14:47.639132 4745 generic.go:334] "Generic (PLEG): container finished" podID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerID="92f38c5b389b104b9805b291f8142622c2958d40dec44e8dffe0957115aae9c3" exitCode=0
Jan 27 12:14:47 crc kubenswrapper[4745]: I0127 12:14:47.639226 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hbsbc" event={"ID":"478908d6-765e-4bd8-a3ef-3142a7641a3b","Type":"ContainerDied","Data":"92f38c5b389b104b9805b291f8142622c2958d40dec44e8dffe0957115aae9c3"}
Jan 27 12:14:47 crc kubenswrapper[4745]: I0127 12:14:47.732063 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:47 crc kubenswrapper[4745]: E0127 12:14:47.732277 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:48.232230017 +0000 UTC m=+181.037140725 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:47 crc kubenswrapper[4745]: I0127 12:14:47.732414 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:47 crc kubenswrapper[4745]: E0127 12:14:47.733083 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:48.233067671 +0000 UTC m=+181.037978359 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:47 crc kubenswrapper[4745]: I0127 12:14:47.833527 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:47 crc kubenswrapper[4745]: E0127 12:14:47.833753 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:48.333730655 +0000 UTC m=+181.138641353 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:47 crc kubenswrapper[4745]: I0127 12:14:47.833901 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:47 crc kubenswrapper[4745]: E0127 12:14:47.834347 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:48.334322552 +0000 UTC m=+181.139233240 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:47 crc kubenswrapper[4745]: I0127 12:14:47.935490 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:47 crc kubenswrapper[4745]: E0127 12:14:47.935670 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:48.435629684 +0000 UTC m=+181.240540382 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:47 crc kubenswrapper[4745]: I0127 12:14:47.936126 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:47 crc kubenswrapper[4745]: E0127 12:14:47.936650 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:48.436601772 +0000 UTC m=+181.241512460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.038064 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:48 crc kubenswrapper[4745]: E0127 12:14:48.038256 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:48.538224683 +0000 UTC m=+181.343135371 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.038413 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:48 crc kubenswrapper[4745]: E0127 12:14:48.038880 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:48.53883003 +0000 UTC m=+181.343740718 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.139360 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:48 crc kubenswrapper[4745]: E0127 12:14:48.139634 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:48.639596987 +0000 UTC m=+181.444507685 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.139987 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:48 crc kubenswrapper[4745]: E0127 12:14:48.140416 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:48.64040118 +0000 UTC m=+181.445311868 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.241057 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:48 crc kubenswrapper[4745]: E0127 12:14:48.241216 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:48.741192557 +0000 UTC m=+181.546103245 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.241280 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:48 crc kubenswrapper[4745]: E0127 12:14:48.241659 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:48.74164585 +0000 UTC m=+181.546556538 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.331649 4745 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.342369 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 12:14:48 crc kubenswrapper[4745]: E0127 12:14:48.342613 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 12:14:48.842568531 +0000 UTC m=+181.647479239 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.342801 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:48 crc kubenswrapper[4745]: E0127 12:14:48.343138 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 12:14:48.843128037 +0000 UTC m=+181.648038725 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hhfbt" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.413083 4745 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-27T12:14:48.331673067Z","Handler":null,"Name":""}
Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.434615 4745 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.434665 4745 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.437243 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 12:14:48 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld
Jan 27 12:14:48 crc kubenswrapper[4745]: [+]process-running ok
Jan 27 12:14:48 crc kubenswrapper[4745]: healthz check failed
Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.437293 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.455161 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
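At 12:14:48 the stall resolves: the csi-hostpathplugin pod's registrar drops its registration socket under /var/lib/kubelet/plugins_registry/, the kubelet's plugin watcher picks it up, and csi_plugin.go validates and registers kubevirt.io.hostpath-provisioner against its endpoint socket. A minimal sketch of the precondition the retry loop was waiting on, using only the Go standard library (this is not kubelet code; the socket paths are taken from the log lines above):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// socketReady reports whether a unix socket exists and accepts
// connections, the two things the registration above establishes.
func socketReady(path string) error {
	if _, err := os.Stat(path); err != nil {
		return fmt.Errorf("socket missing: %w", err)
	}
	conn, err := net.DialTimeout("unix", path, time.Second)
	if err != nil {
		return fmt.Errorf("socket not accepting connections: %w", err)
	}
	return conn.Close()
}

func main() {
	for _, p := range []string{
		"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock",
		"/var/lib/kubelet/plugins/csi-hostpath/csi.sock",
	} {
		fmt.Println(p, "->", socketReady(p))
	}
}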
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.471379 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.556592 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.672991 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4" event={"ID":"a85c1cb8-c1a7-4752-b530-c4d21eb39817","Type":"ContainerStarted","Data":"adefb782983e85e4bc44cb847f322f8c1acb6f2e092dedc3da31c9650ea26193"} Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.673534 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4" Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.678006 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v7vzv" event={"ID":"0b5cf703-06c8-4a98-b58b-71543d23affe","Type":"ContainerStarted","Data":"6227a1691dab9ab8533b408077a8fdff249d159afb1a89d068fbb79014d8da29"} Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.683803 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hbsbc" event={"ID":"478908d6-765e-4bd8-a3ef-3142a7641a3b","Type":"ContainerStarted","Data":"0f74ed4213784ee50c47a4dbcb5e52da9b4eba5f006bb7e4b0f7bd985de43a28"} Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.684443 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-hbsbc" Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.685427 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbsbc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.685497 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.692645 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4" Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.700555 4745 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4" podStartSLOduration=9.70053877 podStartE2EDuration="9.70053877s" podCreationTimestamp="2026-01-27 12:14:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:48.699422628 +0000 UTC m=+181.504333326" watchObservedRunningTime="2026-01-27 12:14:48.70053877 +0000 UTC m=+181.505449458" Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.705473 4745 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.705524 4745 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.778499 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hhfbt\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.932652 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 12:14:48 crc kubenswrapper[4745]: I0127 12:14:48.940376 4745 util.go:30] "No sandbox for pod can be found. 
Jan 27 12:14:49 crc kubenswrapper[4745]: I0127 12:14:49.435101 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 12:14:49 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld
Jan 27 12:14:49 crc kubenswrapper[4745]: [+]process-running ok
Jan 27 12:14:49 crc kubenswrapper[4745]: healthz check failed
Jan 27 12:14:49 crc kubenswrapper[4745]: I0127 12:14:49.435488 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 12:14:49 crc kubenswrapper[4745]: I0127 12:14:49.655173 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hhfbt"]
Jan 27 12:14:49 crc kubenswrapper[4745]: I0127 12:14:49.692574 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbsbc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Jan 27 12:14:49 crc kubenswrapper[4745]: I0127 12:14:49.692645 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Jan 27 12:14:50 crc kubenswrapper[4745]: I0127 12:14:50.094647 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 27 12:14:50 crc kubenswrapper[4745]: I0127 12:14:50.420163 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 12:14:50 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld
Jan 27 12:14:50 crc kubenswrapper[4745]: [+]process-running ok
Jan 27 12:14:50 crc kubenswrapper[4745]: healthz check failed
Jan 27 12:14:50 crc kubenswrapper[4745]: I0127 12:14:50.420236 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 12:14:50 crc kubenswrapper[4745]: I0127 12:14:50.704358 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v7vzv" event={"ID":"0b5cf703-06c8-4a98-b58b-71543d23affe","Type":"ContainerStarted","Data":"cfebe069846926206eab0e2385a13354d431ce2afad4fdc7ec91fdf4e4503d26"}
Jan 27 12:14:50 crc kubenswrapper[4745]: I0127 12:14:50.707186 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" event={"ID":"67cab9e2-eb12-495b-a350-8fc0886c1a29","Type":"ContainerStarted","Data":"dcf811754774e4dbe299301694064282617947e44166cc956163eda53be1b36c"}
Jan 27 12:14:50 crc kubenswrapper[4745]: I0127 12:14:50.707700 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbsbc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Jan 27 12:14:50 crc kubenswrapper[4745]: I0127 12:14:50.707768 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Jan 27 12:14:51 crc kubenswrapper[4745]: I0127 12:14:51.420039 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 12:14:51 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld
Jan 27 12:14:51 crc kubenswrapper[4745]: [+]process-running ok
Jan 27 12:14:51 crc kubenswrapper[4745]: healthz check failed
Jan 27 12:14:51 crc kubenswrapper[4745]: I0127 12:14:51.420317 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 12:14:51 crc kubenswrapper[4745]: I0127 12:14:51.717531 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" event={"ID":"67cab9e2-eb12-495b-a350-8fc0886c1a29","Type":"ContainerStarted","Data":"475cfc31343bd26b863b736c1138584df1a440e7a4225414e8b1f52a56c3d700"}
Jan 27 12:14:52 crc kubenswrapper[4745]: I0127 12:14:52.422205 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 12:14:52 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld
Jan 27 12:14:52 crc kubenswrapper[4745]: [+]process-running ok
Jan 27 12:14:52 crc kubenswrapper[4745]: healthz check failed
Jan 27 12:14:52 crc kubenswrapper[4745]: I0127 12:14:52.422271 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 12:14:53 crc kubenswrapper[4745]: I0127 12:14:53.420343 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 12:14:53 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld
Jan 27 12:14:53 crc kubenswrapper[4745]: [+]process-running ok
Jan 27 12:14:53 crc kubenswrapper[4745]: healthz check failed
Jan 27 12:14:53 crc kubenswrapper[4745]: I0127 12:14:53.420393 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 12:14:53 crc kubenswrapper[4745]: I0127 12:14:53.734195 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt"
Jan 27 12:14:53 crc kubenswrapper[4745]: I0127 12:14:53.755694 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" podStartSLOduration=163.755649801 podStartE2EDuration="2m43.755649801s" podCreationTimestamp="2026-01-27 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:53.750712319 +0000 UTC m=+186.555623017" watchObservedRunningTime="2026-01-27 12:14:53.755649801 +0000 UTC m=+186.560560489"
Jan 27 12:14:53 crc kubenswrapper[4745]: I0127 12:14:53.775584 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-v7vzv" podStartSLOduration=39.775568063 podStartE2EDuration="39.775568063s" podCreationTimestamp="2026-01-27 12:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:14:53.773159944 +0000 UTC m=+186.578070632" watchObservedRunningTime="2026-01-27 12:14:53.775568063 +0000 UTC m=+186.580478741"
Jan 27 12:14:54 crc kubenswrapper[4745]: I0127 12:14:54.422648 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 12:14:54 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld
Jan 27 12:14:54 crc kubenswrapper[4745]: [+]process-running ok
Jan 27 12:14:54 crc kubenswrapper[4745]: healthz check failed
Jan 27 12:14:54 crc kubenswrapper[4745]: I0127 12:14:54.422974 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 12:14:55 crc kubenswrapper[4745]: I0127 12:14:55.421409 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 12:14:55 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld
Jan 27 12:14:55 crc kubenswrapper[4745]: [+]process-running ok
Jan 27 12:14:55 crc kubenswrapper[4745]: healthz check failed
Jan 27 12:14:55 crc kubenswrapper[4745]: I0127 12:14:55.421471 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 12:14:56 crc kubenswrapper[4745]: I0127 12:14:56.227718 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5bdff47b56-cnbn4"]
Jan 27 12:14:56 crc kubenswrapper[4745]: I0127 12:14:56.228062 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4" podUID="a85c1cb8-c1a7-4752-b530-c4d21eb39817" containerName="controller-manager" containerID="cri-o://adefb782983e85e4bc44cb847f322f8c1acb6f2e092dedc3da31c9650ea26193" gracePeriod=30
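The pod_startup_latency_tracker lines above are plain clock arithmetic: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp. A small Go check using the image-registry values from the log (the timestamps are parsed in Go's default time.Time print format):

package main

import (
	"fmt"
	"time"
)

// Reproduces the podStartSLOduration arithmetic from the tracker line:
// watchObservedRunningTime minus podCreationTimestamp.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2026-01-27 12:12:10 +0000 UTC")
	observed, _ := time.Parse(layout, "2026-01-27 12:14:53.755649801 +0000 UTC")
	fmt.Println(observed.Sub(created)) // 2m43.755649801s, i.e. 163.755649801s
}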
Jan 27 12:14:56 crc kubenswrapper[4745]: I0127 12:14:56.246703 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8"]
Jan 27 12:14:56 crc kubenswrapper[4745]: I0127 12:14:56.246983 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" podUID="6067d6a1-495e-4980-8189-ccdc43df1a37" containerName="route-controller-manager" containerID="cri-o://1f583b0a83e7790c196d2e73af9102881bb3f3d6a420889e1bc360384e322004" gracePeriod=30
Jan 27 12:14:56 crc kubenswrapper[4745]: I0127 12:14:56.419746 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 12:14:56 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld
Jan 27 12:14:56 crc kubenswrapper[4745]: [+]process-running ok
Jan 27 12:14:56 crc kubenswrapper[4745]: healthz check failed
Jan 27 12:14:56 crc kubenswrapper[4745]: I0127 12:14:56.419829 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 12:14:56 crc kubenswrapper[4745]: I0127 12:14:56.523160 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbsbc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Jan 27 12:14:56 crc kubenswrapper[4745]: I0127 12:14:56.523219 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Jan 27 12:14:56 crc kubenswrapper[4745]: I0127 12:14:56.524400 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbsbc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Jan 27 12:14:56 crc kubenswrapper[4745]: I0127 12:14:56.524456 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Jan 27 12:14:57 crc kubenswrapper[4745]: I0127 12:14:57.089021 4745 patch_prober.go:28] interesting pod/controller-manager-5bdff47b56-cnbn4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body=
Jan 27 12:14:57 crc kubenswrapper[4745]: I0127 12:14:57.089142 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4" podUID="a85c1cb8-c1a7-4752-b530-c4d21eb39817" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused"
Jan 27 12:14:57 crc kubenswrapper[4745]: I0127 12:14:57.317714 4745 patch_prober.go:28] interesting pod/console-f9d7485db-zqrwf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Jan 27 12:14:57 crc kubenswrapper[4745]: I0127 12:14:57.317794 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zqrwf" podUID="94eb6425-bdf2-43d1-926e-c94700a985be" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused"
Jan 27 12:14:57 crc kubenswrapper[4745]: I0127 12:14:57.420501 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 12:14:57 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld
Jan 27 12:14:57 crc kubenswrapper[4745]: [+]process-running ok
Jan 27 12:14:57 crc kubenswrapper[4745]: healthz check failed
Jan 27 12:14:57 crc kubenswrapper[4745]: I0127 12:14:57.420598 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 12:14:57 crc kubenswrapper[4745]: I0127 12:14:57.493611 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5mbc7"
Jan 27 12:14:58 crc kubenswrapper[4745]: I0127 12:14:58.420444 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 12:14:58 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld
Jan 27 12:14:58 crc kubenswrapper[4745]: [+]process-running ok
Jan 27 12:14:58 crc kubenswrapper[4745]: healthz check failed
Jan 27 12:14:58 crc kubenswrapper[4745]: I0127 12:14:58.420503 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 12:14:58 crc kubenswrapper[4745]: I0127 12:14:58.775160 4745 generic.go:334] "Generic (PLEG): container finished" podID="6067d6a1-495e-4980-8189-ccdc43df1a37" containerID="1f583b0a83e7790c196d2e73af9102881bb3f3d6a420889e1bc360384e322004" exitCode=0
Jan 27 12:14:58 crc kubenswrapper[4745]: I0127 12:14:58.775244 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" event={"ID":"6067d6a1-495e-4980-8189-ccdc43df1a37","Type":"ContainerDied","Data":"1f583b0a83e7790c196d2e73af9102881bb3f3d6a420889e1bc360384e322004"}
Jan 27 12:14:58 crc kubenswrapper[4745]: I0127 12:14:58.777229 4745 generic.go:334] "Generic (PLEG): container finished" podID="a85c1cb8-c1a7-4752-b530-c4d21eb39817" containerID="adefb782983e85e4bc44cb847f322f8c1acb6f2e092dedc3da31c9650ea26193" exitCode=0
Jan 27 12:14:58 crc kubenswrapper[4745]: I0127 12:14:58.777255 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4" event={"ID":"a85c1cb8-c1a7-4752-b530-c4d21eb39817","Type":"ContainerDied","Data":"adefb782983e85e4bc44cb847f322f8c1acb6f2e092dedc3da31c9650ea26193"}
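Both deleted controllers exit cleanly (exitCode=0) about 2.5 seconds into their 30-second grace period. The sequence the gracePeriod=30 lines imply is the usual TERM-then-KILL escalation; a sketch against a throwaway child process follows (this is not CRI-O's implementation, and the 2-second grace below is shortened for the demo):

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace sends SIGTERM, waits up to the grace period, then
// escalates to SIGKILL, mirroring the kubelet's container stop request.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	cmd.Process.Signal(syscall.SIGTERM) // polite stop request
	select {
	case <-done:
		fmt.Println("exited within the grace period")
	case <-time.After(grace):
		cmd.Process.Kill() // escalate, as the kubelet would after 30s
		<-done
		fmt.Println("killed after the grace period expired")
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	stopWithGrace(cmd, 2*time.Second)
}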
Jan 27 12:14:59 crc kubenswrapper[4745]: I0127 12:14:59.342090 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 12:14:59 crc kubenswrapper[4745]: I0127 12:14:59.421325 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 12:14:59 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld
Jan 27 12:14:59 crc kubenswrapper[4745]: [+]process-running ok
Jan 27 12:14:59 crc kubenswrapper[4745]: healthz check failed
Jan 27 12:14:59 crc kubenswrapper[4745]: I0127 12:14:59.421394 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 12:15:00 crc kubenswrapper[4745]: I0127 12:15:00.128713 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491935-z6lc8"]
Jan 27 12:15:00 crc kubenswrapper[4745]: I0127 12:15:00.130101 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491935-z6lc8"
Jan 27 12:15:00 crc kubenswrapper[4745]: I0127 12:15:00.135855 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 27 12:15:00 crc kubenswrapper[4745]: I0127 12:15:00.136198 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 27 12:15:00 crc kubenswrapper[4745]: I0127 12:15:00.138447 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491935-z6lc8"]
Jan 27 12:15:00 crc kubenswrapper[4745]: I0127 12:15:00.212319 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe4ab457-bf86-43e0-898e-d7d1b5965142-secret-volume\") pod \"collect-profiles-29491935-z6lc8\" (UID: \"fe4ab457-bf86-43e0-898e-d7d1b5965142\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491935-z6lc8"
Jan 27 12:15:00 crc kubenswrapper[4745]: I0127 12:15:00.212397 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe4ab457-bf86-43e0-898e-d7d1b5965142-config-volume\") pod \"collect-profiles-29491935-z6lc8\" (UID: \"fe4ab457-bf86-43e0-898e-d7d1b5965142\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491935-z6lc8"
Jan 27 12:15:00 crc kubenswrapper[4745]: I0127 12:15:00.212456 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccxfm\" (UniqueName: \"kubernetes.io/projected/fe4ab457-bf86-43e0-898e-d7d1b5965142-kube-api-access-ccxfm\") pod \"collect-profiles-29491935-z6lc8\" (UID: \"fe4ab457-bf86-43e0-898e-d7d1b5965142\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491935-z6lc8"
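The collect-profiles pod lands exactly on the 12:15:00 tick because it comes from a CronJob, and the numeric suffix in the Job name encodes the scheduled time in minutes since the Unix epoch (the trailing z6lc8 is the usual random pod suffix). A one-line Go check of that decoding, assuming that naming convention:

package main

import (
	"fmt"
	"time"
)

// Decode the 29491935 suffix from collect-profiles-29491935-z6lc8 as
// minutes since the Unix epoch; it should land on the 12:15:00 tick.
func main() {
	fmt.Println(time.Unix(29491935*60, 0).UTC()) // 2026-01-27 12:15:00 +0000 UTC
}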
" pod="openshift-operator-lifecycle-manager/collect-profiles-29491935-z6lc8" Jan 27 12:15:00 crc kubenswrapper[4745]: I0127 12:15:00.313973 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe4ab457-bf86-43e0-898e-d7d1b5965142-secret-volume\") pod \"collect-profiles-29491935-z6lc8\" (UID: \"fe4ab457-bf86-43e0-898e-d7d1b5965142\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491935-z6lc8" Jan 27 12:15:00 crc kubenswrapper[4745]: I0127 12:15:00.314041 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe4ab457-bf86-43e0-898e-d7d1b5965142-config-volume\") pod \"collect-profiles-29491935-z6lc8\" (UID: \"fe4ab457-bf86-43e0-898e-d7d1b5965142\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491935-z6lc8" Jan 27 12:15:00 crc kubenswrapper[4745]: I0127 12:15:00.314105 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccxfm\" (UniqueName: \"kubernetes.io/projected/fe4ab457-bf86-43e0-898e-d7d1b5965142-kube-api-access-ccxfm\") pod \"collect-profiles-29491935-z6lc8\" (UID: \"fe4ab457-bf86-43e0-898e-d7d1b5965142\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491935-z6lc8" Jan 27 12:15:00 crc kubenswrapper[4745]: I0127 12:15:00.318645 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe4ab457-bf86-43e0-898e-d7d1b5965142-config-volume\") pod \"collect-profiles-29491935-z6lc8\" (UID: \"fe4ab457-bf86-43e0-898e-d7d1b5965142\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491935-z6lc8" Jan 27 12:15:00 crc kubenswrapper[4745]: I0127 12:15:00.327704 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe4ab457-bf86-43e0-898e-d7d1b5965142-secret-volume\") pod \"collect-profiles-29491935-z6lc8\" (UID: \"fe4ab457-bf86-43e0-898e-d7d1b5965142\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491935-z6lc8" Jan 27 12:15:00 crc kubenswrapper[4745]: I0127 12:15:00.330545 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccxfm\" (UniqueName: \"kubernetes.io/projected/fe4ab457-bf86-43e0-898e-d7d1b5965142-kube-api-access-ccxfm\") pod \"collect-profiles-29491935-z6lc8\" (UID: \"fe4ab457-bf86-43e0-898e-d7d1b5965142\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491935-z6lc8" Jan 27 12:15:00 crc kubenswrapper[4745]: I0127 12:15:00.420724 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 12:15:00 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 27 12:15:00 crc kubenswrapper[4745]: [+]process-running ok Jan 27 12:15:00 crc kubenswrapper[4745]: healthz check failed Jan 27 12:15:00 crc kubenswrapper[4745]: I0127 12:15:00.420780 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 12:15:00 crc kubenswrapper[4745]: I0127 12:15:00.451112 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491935-z6lc8" Jan 27 12:15:01 crc kubenswrapper[4745]: I0127 12:15:01.420367 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 12:15:01 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 27 12:15:01 crc kubenswrapper[4745]: [+]process-running ok Jan 27 12:15:01 crc kubenswrapper[4745]: healthz check failed Jan 27 12:15:01 crc kubenswrapper[4745]: I0127 12:15:01.420427 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 12:15:02 crc kubenswrapper[4745]: I0127 12:15:02.420685 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 12:15:02 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 27 12:15:02 crc kubenswrapper[4745]: [+]process-running ok Jan 27 12:15:02 crc kubenswrapper[4745]: healthz check failed Jan 27 12:15:02 crc kubenswrapper[4745]: I0127 12:15:02.420763 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 12:15:03 crc kubenswrapper[4745]: I0127 12:15:03.419727 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 12:15:03 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 27 12:15:03 crc kubenswrapper[4745]: [+]process-running ok Jan 27 12:15:03 crc kubenswrapper[4745]: healthz check failed Jan 27 12:15:03 crc kubenswrapper[4745]: I0127 12:15:03.419792 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 12:15:04 crc kubenswrapper[4745]: I0127 12:15:04.110484 4745 patch_prober.go:28] interesting pod/route-controller-manager-548d5b954d-pxxs8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Jan 27 12:15:04 crc kubenswrapper[4745]: I0127 12:15:04.110788 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" podUID="6067d6a1-495e-4980-8189-ccdc43df1a37" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Jan 27 12:15:04 crc kubenswrapper[4745]: I0127 12:15:04.420019 4745 patch_prober.go:28] interesting pod/router-default-5444994796-5mbhc container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 12:15:04 crc kubenswrapper[4745]: [+]has-synced ok Jan 27 12:15:04 crc kubenswrapper[4745]: [+]process-running ok Jan 27 12:15:04 crc kubenswrapper[4745]: healthz check failed Jan 27 12:15:04 crc kubenswrapper[4745]: I0127 12:15:04.420064 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5mbhc" podUID="71fd83ec-fa99-4caa-a216-1f1bb2be9251" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 12:15:05 crc kubenswrapper[4745]: I0127 12:15:05.420765 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-5mbhc" Jan 27 12:15:05 crc kubenswrapper[4745]: I0127 12:15:05.422583 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-5mbhc" Jan 27 12:15:05 crc kubenswrapper[4745]: I0127 12:15:05.967399 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 12:15:05 crc kubenswrapper[4745]: I0127 12:15:05.967481 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 12:15:06 crc kubenswrapper[4745]: I0127 12:15:06.521504 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbsbc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 27 12:15:06 crc kubenswrapper[4745]: I0127 12:15:06.521587 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbsbc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 27 12:15:06 crc kubenswrapper[4745]: I0127 12:15:06.521614 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 27 12:15:06 crc kubenswrapper[4745]: I0127 12:15:06.521661 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 27 12:15:07 crc kubenswrapper[4745]: I0127 12:15:07.088248 4745 patch_prober.go:28] interesting pod/controller-manager-5bdff47b56-cnbn4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Jan 27 12:15:07 
crc kubenswrapper[4745]: I0127 12:15:07.088330 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4" podUID="a85c1cb8-c1a7-4752-b530-c4d21eb39817" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Jan 27 12:15:07 crc kubenswrapper[4745]: I0127 12:15:07.316838 4745 patch_prober.go:28] interesting pod/console-f9d7485db-zqrwf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 27 12:15:07 crc kubenswrapper[4745]: I0127 12:15:07.317147 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zqrwf" podUID="94eb6425-bdf2-43d1-926e-c94700a985be" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 27 12:15:08 crc kubenswrapper[4745]: I0127 12:15:08.948613 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:15:14 crc kubenswrapper[4745]: I0127 12:15:14.111434 4745 patch_prober.go:28] interesting pod/route-controller-manager-548d5b954d-pxxs8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Jan 27 12:15:14 crc kubenswrapper[4745]: I0127 12:15:14.111860 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" podUID="6067d6a1-495e-4980-8189-ccdc43df1a37" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Jan 27 12:15:16 crc kubenswrapper[4745]: I0127 12:15:16.521926 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbsbc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 27 12:15:16 crc kubenswrapper[4745]: I0127 12:15:16.521984 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 27 12:15:16 crc kubenswrapper[4745]: I0127 12:15:16.522027 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-hbsbc" Jan 27 12:15:16 crc kubenswrapper[4745]: I0127 12:15:16.521938 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbsbc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 27 12:15:16 crc kubenswrapper[4745]: I0127 12:15:16.522072 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" probeResult="failure" 
output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 27 12:15:16 crc kubenswrapper[4745]: I0127 12:15:16.522748 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"0f74ed4213784ee50c47a4dbcb5e52da9b4eba5f006bb7e4b0f7bd985de43a28"} pod="openshift-console/downloads-7954f5f757-hbsbc" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 27 12:15:16 crc kubenswrapper[4745]: I0127 12:15:16.522796 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" containerID="cri-o://0f74ed4213784ee50c47a4dbcb5e52da9b4eba5f006bb7e4b0f7bd985de43a28" gracePeriod=2 Jan 27 12:15:16 crc kubenswrapper[4745]: I0127 12:15:16.522908 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbsbc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 27 12:15:16 crc kubenswrapper[4745]: I0127 12:15:16.522936 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.328031 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-zqrwf" Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.337415 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-zqrwf" Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.374421 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.375300 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.377444 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.378029 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.381621 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.461026 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2dce2e2-534d-417d-a4f7-d945631a53b4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b2dce2e2-534d-417d-a4f7-d945631a53b4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.461072 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b2dce2e2-534d-417d-a4f7-d945631a53b4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b2dce2e2-534d-417d-a4f7-d945631a53b4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.562040 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2dce2e2-534d-417d-a4f7-d945631a53b4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b2dce2e2-534d-417d-a4f7-d945631a53b4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.562090 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b2dce2e2-534d-417d-a4f7-d945631a53b4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b2dce2e2-534d-417d-a4f7-d945631a53b4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.562194 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b2dce2e2-534d-417d-a4f7-d945631a53b4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b2dce2e2-534d-417d-a4f7-d945631a53b4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.584515 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2dce2e2-534d-417d-a4f7-d945631a53b4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b2dce2e2-534d-417d-a4f7-d945631a53b4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.704494 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.897505 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4" event={"ID":"a85c1cb8-c1a7-4752-b530-c4d21eb39817","Type":"ContainerDied","Data":"5028a63d3661ba3435a948fe75f527977230729faa80e166e332052c86246301"} Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.897548 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5028a63d3661ba3435a948fe75f527977230729faa80e166e332052c86246301" Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.899550 4745 generic.go:334] "Generic (PLEG): container finished" podID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerID="0f74ed4213784ee50c47a4dbcb5e52da9b4eba5f006bb7e4b0f7bd985de43a28" exitCode=0 Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.899671 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hbsbc" event={"ID":"478908d6-765e-4bd8-a3ef-3142a7641a3b","Type":"ContainerDied","Data":"0f74ed4213784ee50c47a4dbcb5e52da9b4eba5f006bb7e4b0f7bd985de43a28"} Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.899773 4745 scope.go:117] "RemoveContainer" containerID="92f38c5b389b104b9805b291f8142622c2958d40dec44e8dffe0957115aae9c3" Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.906637 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4" Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.973438 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a85c1cb8-c1a7-4752-b530-c4d21eb39817-proxy-ca-bundles\") pod \"a85c1cb8-c1a7-4752-b530-c4d21eb39817\" (UID: \"a85c1cb8-c1a7-4752-b530-c4d21eb39817\") " Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.973491 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6p4kx\" (UniqueName: \"kubernetes.io/projected/a85c1cb8-c1a7-4752-b530-c4d21eb39817-kube-api-access-6p4kx\") pod \"a85c1cb8-c1a7-4752-b530-c4d21eb39817\" (UID: \"a85c1cb8-c1a7-4752-b530-c4d21eb39817\") " Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.974242 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a85c1cb8-c1a7-4752-b530-c4d21eb39817-serving-cert\") pod \"a85c1cb8-c1a7-4752-b530-c4d21eb39817\" (UID: \"a85c1cb8-c1a7-4752-b530-c4d21eb39817\") " Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.974351 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a85c1cb8-c1a7-4752-b530-c4d21eb39817-config\") pod \"a85c1cb8-c1a7-4752-b530-c4d21eb39817\" (UID: \"a85c1cb8-c1a7-4752-b530-c4d21eb39817\") " Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.974438 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a85c1cb8-c1a7-4752-b530-c4d21eb39817-client-ca\") pod \"a85c1cb8-c1a7-4752-b530-c4d21eb39817\" (UID: \"a85c1cb8-c1a7-4752-b530-c4d21eb39817\") " Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.974515 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/a85c1cb8-c1a7-4752-b530-c4d21eb39817-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a85c1cb8-c1a7-4752-b530-c4d21eb39817" (UID: "a85c1cb8-c1a7-4752-b530-c4d21eb39817"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.974890 4745 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a85c1cb8-c1a7-4752-b530-c4d21eb39817-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.975443 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a85c1cb8-c1a7-4752-b530-c4d21eb39817-client-ca" (OuterVolumeSpecName: "client-ca") pod "a85c1cb8-c1a7-4752-b530-c4d21eb39817" (UID: "a85c1cb8-c1a7-4752-b530-c4d21eb39817"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.976351 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a85c1cb8-c1a7-4752-b530-c4d21eb39817-config" (OuterVolumeSpecName: "config") pod "a85c1cb8-c1a7-4752-b530-c4d21eb39817" (UID: "a85c1cb8-c1a7-4752-b530-c4d21eb39817"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.977977 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a85c1cb8-c1a7-4752-b530-c4d21eb39817-kube-api-access-6p4kx" (OuterVolumeSpecName: "kube-api-access-6p4kx") pod "a85c1cb8-c1a7-4752-b530-c4d21eb39817" (UID: "a85c1cb8-c1a7-4752-b530-c4d21eb39817"). InnerVolumeSpecName "kube-api-access-6p4kx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.980273 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a85c1cb8-c1a7-4752-b530-c4d21eb39817-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a85c1cb8-c1a7-4752-b530-c4d21eb39817" (UID: "a85c1cb8-c1a7-4752-b530-c4d21eb39817"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.981233 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-66df77f6dd-8xdt6"] Jan 27 12:15:17 crc kubenswrapper[4745]: E0127 12:15:17.981437 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a85c1cb8-c1a7-4752-b530-c4d21eb39817" containerName="controller-manager" Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.981453 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="a85c1cb8-c1a7-4752-b530-c4d21eb39817" containerName="controller-manager" Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.981536 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="a85c1cb8-c1a7-4752-b530-c4d21eb39817" containerName="controller-manager" Jan 27 12:15:17 crc kubenswrapper[4745]: I0127 12:15:17.982122 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-66df77f6dd-8xdt6" Jan 27 12:15:18 crc kubenswrapper[4745]: I0127 12:15:18.000353 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66df77f6dd-8xdt6"] Jan 27 12:15:18 crc kubenswrapper[4745]: I0127 12:15:18.075674 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-config\") pod \"controller-manager-66df77f6dd-8xdt6\" (UID: \"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7\") " pod="openshift-controller-manager/controller-manager-66df77f6dd-8xdt6" Jan 27 12:15:18 crc kubenswrapper[4745]: I0127 12:15:18.075717 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv5w6\" (UniqueName: \"kubernetes.io/projected/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-kube-api-access-fv5w6\") pod \"controller-manager-66df77f6dd-8xdt6\" (UID: \"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7\") " pod="openshift-controller-manager/controller-manager-66df77f6dd-8xdt6" Jan 27 12:15:18 crc kubenswrapper[4745]: I0127 12:15:18.075737 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-proxy-ca-bundles\") pod \"controller-manager-66df77f6dd-8xdt6\" (UID: \"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7\") " pod="openshift-controller-manager/controller-manager-66df77f6dd-8xdt6" Jan 27 12:15:18 crc kubenswrapper[4745]: I0127 12:15:18.075768 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-serving-cert\") pod \"controller-manager-66df77f6dd-8xdt6\" (UID: \"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7\") " pod="openshift-controller-manager/controller-manager-66df77f6dd-8xdt6" Jan 27 12:15:18 crc kubenswrapper[4745]: I0127 12:15:18.075835 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-client-ca\") pod \"controller-manager-66df77f6dd-8xdt6\" (UID: \"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7\") " pod="openshift-controller-manager/controller-manager-66df77f6dd-8xdt6" Jan 27 12:15:18 crc kubenswrapper[4745]: I0127 12:15:18.075884 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6p4kx\" (UniqueName: \"kubernetes.io/projected/a85c1cb8-c1a7-4752-b530-c4d21eb39817-kube-api-access-6p4kx\") on node \"crc\" DevicePath \"\"" Jan 27 12:15:18 crc kubenswrapper[4745]: I0127 12:15:18.075894 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a85c1cb8-c1a7-4752-b530-c4d21eb39817-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:15:18 crc kubenswrapper[4745]: I0127 12:15:18.075903 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a85c1cb8-c1a7-4752-b530-c4d21eb39817-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:15:18 crc kubenswrapper[4745]: I0127 12:15:18.075911 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a85c1cb8-c1a7-4752-b530-c4d21eb39817-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:15:18 crc 
kubenswrapper[4745]: I0127 12:15:18.088292 4745 patch_prober.go:28] interesting pod/controller-manager-5bdff47b56-cnbn4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 12:15:18 crc kubenswrapper[4745]: I0127 12:15:18.088374 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4" podUID="a85c1cb8-c1a7-4752-b530-c4d21eb39817" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 12:15:18 crc kubenswrapper[4745]: I0127 12:15:18.177725 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-config\") pod \"controller-manager-66df77f6dd-8xdt6\" (UID: \"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7\") " pod="openshift-controller-manager/controller-manager-66df77f6dd-8xdt6" Jan 27 12:15:18 crc kubenswrapper[4745]: I0127 12:15:18.177856 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv5w6\" (UniqueName: \"kubernetes.io/projected/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-kube-api-access-fv5w6\") pod \"controller-manager-66df77f6dd-8xdt6\" (UID: \"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7\") " pod="openshift-controller-manager/controller-manager-66df77f6dd-8xdt6" Jan 27 12:15:18 crc kubenswrapper[4745]: I0127 12:15:18.177897 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-proxy-ca-bundles\") pod \"controller-manager-66df77f6dd-8xdt6\" (UID: \"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7\") " pod="openshift-controller-manager/controller-manager-66df77f6dd-8xdt6" Jan 27 12:15:18 crc kubenswrapper[4745]: I0127 12:15:18.177950 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-serving-cert\") pod \"controller-manager-66df77f6dd-8xdt6\" (UID: \"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7\") " pod="openshift-controller-manager/controller-manager-66df77f6dd-8xdt6" Jan 27 12:15:18 crc kubenswrapper[4745]: I0127 12:15:18.177998 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-client-ca\") pod \"controller-manager-66df77f6dd-8xdt6\" (UID: \"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7\") " pod="openshift-controller-manager/controller-manager-66df77f6dd-8xdt6" Jan 27 12:15:18 crc kubenswrapper[4745]: I0127 12:15:18.179377 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-client-ca\") pod \"controller-manager-66df77f6dd-8xdt6\" (UID: \"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7\") " pod="openshift-controller-manager/controller-manager-66df77f6dd-8xdt6" Jan 27 12:15:18 crc kubenswrapper[4745]: I0127 12:15:18.179547 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-config\") pod \"controller-manager-66df77f6dd-8xdt6\" (UID: \"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7\") " pod="openshift-controller-manager/controller-manager-66df77f6dd-8xdt6" Jan 27 12:15:18 crc kubenswrapper[4745]: I0127 12:15:18.180996 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-proxy-ca-bundles\") pod \"controller-manager-66df77f6dd-8xdt6\" (UID: \"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7\") " pod="openshift-controller-manager/controller-manager-66df77f6dd-8xdt6" Jan 27 12:15:18 crc kubenswrapper[4745]: I0127 12:15:18.181767 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-serving-cert\") pod \"controller-manager-66df77f6dd-8xdt6\" (UID: \"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7\") " pod="openshift-controller-manager/controller-manager-66df77f6dd-8xdt6" Jan 27 12:15:18 crc kubenswrapper[4745]: I0127 12:15:18.195265 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv5w6\" (UniqueName: \"kubernetes.io/projected/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-kube-api-access-fv5w6\") pod \"controller-manager-66df77f6dd-8xdt6\" (UID: \"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7\") " pod="openshift-controller-manager/controller-manager-66df77f6dd-8xdt6" Jan 27 12:15:18 crc kubenswrapper[4745]: I0127 12:15:18.335781 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66df77f6dd-8xdt6" Jan 27 12:15:18 crc kubenswrapper[4745]: I0127 12:15:18.915459 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bdff47b56-cnbn4" Jan 27 12:15:18 crc kubenswrapper[4745]: I0127 12:15:18.946205 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5bdff47b56-cnbn4"] Jan 27 12:15:18 crc kubenswrapper[4745]: I0127 12:15:18.951061 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5bdff47b56-cnbn4"] Jan 27 12:15:20 crc kubenswrapper[4745]: I0127 12:15:20.080885 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a85c1cb8-c1a7-4752-b530-c4d21eb39817" path="/var/lib/kubelet/pods/a85c1cb8-c1a7-4752-b530-c4d21eb39817/volumes" Jan 27 12:15:20 crc kubenswrapper[4745]: E0127 12:15:20.944353 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage3007126922/2\": happened during read: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 27 12:15:20 crc kubenswrapper[4745]: E0127 12:15:20.944696 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vkw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-2c9wm_openshift-marketplace(3fcec544-9ef8-406d-9f01-b3ceabf2b033): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage3007126922/2\": happened during read: context canceled" logger="UnhandledError" Jan 27 12:15:20 crc kubenswrapper[4745]: E0127 12:15:20.946112 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \\\"/var/tmp/container_images_storage3007126922/2\\\": happened during read: context canceled\"" 
pod="openshift-marketplace/redhat-operators-2c9wm" podUID="3fcec544-9ef8-406d-9f01-b3ceabf2b033" Jan 27 12:15:21 crc kubenswrapper[4745]: I0127 12:15:21.572734 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 12:15:21 crc kubenswrapper[4745]: I0127 12:15:21.576133 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 12:15:21 crc kubenswrapper[4745]: I0127 12:15:21.605980 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 12:15:21 crc kubenswrapper[4745]: I0127 12:15:21.621976 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1170ea3-dafb-4e36-b873-bc1e339b86b9-kube-api-access\") pod \"installer-9-crc\" (UID: \"d1170ea3-dafb-4e36-b873-bc1e339b86b9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 12:15:21 crc kubenswrapper[4745]: I0127 12:15:21.622063 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1170ea3-dafb-4e36-b873-bc1e339b86b9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d1170ea3-dafb-4e36-b873-bc1e339b86b9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 12:15:21 crc kubenswrapper[4745]: I0127 12:15:21.622107 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d1170ea3-dafb-4e36-b873-bc1e339b86b9-var-lock\") pod \"installer-9-crc\" (UID: \"d1170ea3-dafb-4e36-b873-bc1e339b86b9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 12:15:21 crc kubenswrapper[4745]: I0127 12:15:21.723141 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1170ea3-dafb-4e36-b873-bc1e339b86b9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d1170ea3-dafb-4e36-b873-bc1e339b86b9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 12:15:21 crc kubenswrapper[4745]: I0127 12:15:21.723190 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d1170ea3-dafb-4e36-b873-bc1e339b86b9-var-lock\") pod \"installer-9-crc\" (UID: \"d1170ea3-dafb-4e36-b873-bc1e339b86b9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 12:15:21 crc kubenswrapper[4745]: I0127 12:15:21.723210 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1170ea3-dafb-4e36-b873-bc1e339b86b9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d1170ea3-dafb-4e36-b873-bc1e339b86b9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 12:15:21 crc kubenswrapper[4745]: I0127 12:15:21.723232 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1170ea3-dafb-4e36-b873-bc1e339b86b9-kube-api-access\") pod \"installer-9-crc\" (UID: \"d1170ea3-dafb-4e36-b873-bc1e339b86b9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 12:15:21 crc kubenswrapper[4745]: I0127 12:15:21.723264 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d1170ea3-dafb-4e36-b873-bc1e339b86b9-var-lock\") pod \"installer-9-crc\" (UID: 
\"d1170ea3-dafb-4e36-b873-bc1e339b86b9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 12:15:21 crc kubenswrapper[4745]: I0127 12:15:21.742009 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1170ea3-dafb-4e36-b873-bc1e339b86b9-kube-api-access\") pod \"installer-9-crc\" (UID: \"d1170ea3-dafb-4e36-b873-bc1e339b86b9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 12:15:21 crc kubenswrapper[4745]: I0127 12:15:21.908171 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 12:15:25 crc kubenswrapper[4745]: I0127 12:15:25.111686 4745 patch_prober.go:28] interesting pod/route-controller-manager-548d5b954d-pxxs8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 12:15:25 crc kubenswrapper[4745]: I0127 12:15:25.111994 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" podUID="6067d6a1-495e-4980-8189-ccdc43df1a37" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 12:15:26 crc kubenswrapper[4745]: I0127 12:15:26.521634 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbsbc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 27 12:15:26 crc kubenswrapper[4745]: I0127 12:15:26.521729 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 27 12:15:28 crc kubenswrapper[4745]: E0127 12:15:28.282468 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 12:15:28 crc kubenswrapper[4745]: E0127 12:15:28.282948 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t9vlg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-zhnbq_openshift-marketplace(d2b41701-5113-4970-8d93-157bf16b3c06): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 12:15:28 crc kubenswrapper[4745]: E0127 12:15:28.285008 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-zhnbq" podUID="d2b41701-5113-4970-8d93-157bf16b3c06" Jan 27 12:15:35 crc kubenswrapper[4745]: I0127 12:15:35.111741 4745 patch_prober.go:28] interesting pod/route-controller-manager-548d5b954d-pxxs8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 12:15:35 crc kubenswrapper[4745]: I0127 12:15:35.112115 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" podUID="6067d6a1-495e-4980-8189-ccdc43df1a37" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 12:15:35 crc kubenswrapper[4745]: I0127 12:15:35.967151 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 12:15:35 crc kubenswrapper[4745]: I0127 12:15:35.967244 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 12:15:35 crc kubenswrapper[4745]: I0127 12:15:35.967304 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" Jan 27 12:15:35 crc kubenswrapper[4745]: I0127 12:15:35.968048 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865"} pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 12:15:35 crc kubenswrapper[4745]: I0127 12:15:35.968120 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" containerID="cri-o://d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865" gracePeriod=600 Jan 27 12:15:36 crc kubenswrapper[4745]: I0127 12:15:36.521115 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbsbc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 27 12:15:36 crc kubenswrapper[4745]: I0127 12:15:36.521440 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 27 12:15:40 crc kubenswrapper[4745]: I0127 12:15:40.029570 4745 generic.go:334] "Generic (PLEG): container finished" podID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerID="d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865" exitCode=0 Jan 27 12:15:40 crc kubenswrapper[4745]: I0127 12:15:40.029652 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerDied","Data":"d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865"} Jan 27 12:15:42 crc kubenswrapper[4745]: E0127 12:15:42.010202 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 12:15:42 crc kubenswrapper[4745]: E0127 12:15:42.010519 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dw9nh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fzw5q_openshift-marketplace(36154dea-ca68-4ca6-8e2f-83a669152ca7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 12:15:42 crc kubenswrapper[4745]: E0127 12:15:42.012449 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-fzw5q" podUID="36154dea-ca68-4ca6-8e2f-83a669152ca7" Jan 27 12:15:45 crc kubenswrapper[4745]: I0127 12:15:45.110997 4745 patch_prober.go:28] interesting pod/route-controller-manager-548d5b954d-pxxs8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 12:15:45 crc kubenswrapper[4745]: I0127 12:15:45.111079 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" podUID="6067d6a1-495e-4980-8189-ccdc43df1a37" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 12:15:46 crc kubenswrapper[4745]: I0127 12:15:46.520841 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbsbc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 27 12:15:46 crc kubenswrapper[4745]: I0127 12:15:46.520903 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection 
refused" Jan 27 12:15:52 crc kubenswrapper[4745]: E0127 12:15:52.204471 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 12:15:52 crc kubenswrapper[4745]: E0127 12:15:52.205076 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kfrhw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-hw272_openshift-marketplace(6d114857-b077-4798-b578-b9a15645d31f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 12:15:52 crc kubenswrapper[4745]: E0127 12:15:52.206283 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-hw272" podUID="6d114857-b077-4798-b578-b9a15645d31f" Jan 27 12:15:54 crc kubenswrapper[4745]: E0127 12:15:54.291716 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 12:15:54 crc kubenswrapper[4745]: E0127 12:15:54.292232 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ccqcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-tzw6b_openshift-marketplace(341a8942-834f-4f76-8269-7ecdecaaa1b0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 12:15:54 crc kubenswrapper[4745]: E0127 12:15:54.293415 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-tzw6b" podUID="341a8942-834f-4f76-8269-7ecdecaaa1b0" Jan 27 12:15:55 crc kubenswrapper[4745]: I0127 12:15:55.110291 4745 patch_prober.go:28] interesting pod/route-controller-manager-548d5b954d-pxxs8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 12:15:55 crc kubenswrapper[4745]: I0127 12:15:55.110357 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" podUID="6067d6a1-495e-4980-8189-ccdc43df1a37" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 12:15:55 crc kubenswrapper[4745]: E0127 12:15:55.923534 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 12:15:55 crc kubenswrapper[4745]: E0127 12:15:55.923725 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lnv2n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-255wr_openshift-marketplace(64c43381-42e2-4e01-9559-70c3c56070ea): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 12:15:55 crc kubenswrapper[4745]: E0127 12:15:55.924940 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-255wr" podUID="64c43381-42e2-4e01-9559-70c3c56070ea" Jan 27 12:15:56 crc kubenswrapper[4745]: I0127 12:15:56.522417 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbsbc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 27 12:15:56 crc kubenswrapper[4745]: I0127 12:15:56.523921 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 27 12:15:57 crc kubenswrapper[4745]: E0127 12:15:57.435599 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 12:15:57 crc kubenswrapper[4745]: E0127 12:15:57.435770 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pxf9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-bmx2n_openshift-marketplace(7c6f4dda-1294-4903-a4c1-6685307c3b25): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 12:15:57 crc kubenswrapper[4745]: E0127 12:15:57.436940 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-bmx2n" podUID="7c6f4dda-1294-4903-a4c1-6685307c3b25" Jan 27 12:16:05 crc kubenswrapper[4745]: I0127 12:16:05.110849 4745 patch_prober.go:28] interesting pod/route-controller-manager-548d5b954d-pxxs8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 12:16:05 crc kubenswrapper[4745]: I0127 12:16:05.111375 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" podUID="6067d6a1-495e-4980-8189-ccdc43df1a37" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 12:16:06 crc kubenswrapper[4745]: I0127 12:16:06.521733 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbsbc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 27 12:16:06 crc kubenswrapper[4745]: I0127 12:16:06.521922 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: 
connection refused" Jan 27 12:16:08 crc kubenswrapper[4745]: E0127 12:16:08.449472 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 12:16:08 crc kubenswrapper[4745]: E0127 12:16:08.449668 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t9vlg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-zhnbq_openshift-marketplace(d2b41701-5113-4970-8d93-157bf16b3c06): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 12:16:08 crc kubenswrapper[4745]: E0127 12:16:08.452443 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-zhnbq" podUID="d2b41701-5113-4970-8d93-157bf16b3c06" Jan 27 12:16:08 crc kubenswrapper[4745]: E0127 12:16:08.475663 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 27 12:16:08 crc kubenswrapper[4745]: E0127 12:16:08.475920 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vkw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-2c9wm_openshift-marketplace(3fcec544-9ef8-406d-9f01-b3ceabf2b033): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 12:16:08 crc kubenswrapper[4745]: E0127 12:16:08.477087 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-2c9wm" podUID="3fcec544-9ef8-406d-9f01-b3ceabf2b033" Jan 27 12:16:08 crc kubenswrapper[4745]: E0127 12:16:08.501704 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 27 12:16:08 crc kubenswrapper[4745]: E0127 12:16:08.501892 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-224b6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-9tkgm_openshift-marketplace(7ff89667-3b76-4571-a07b-d43bce0a2e5b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 12:16:08 crc kubenswrapper[4745]: E0127 12:16:08.503077 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-9tkgm" podUID="7ff89667-3b76-4571-a07b-d43bce0a2e5b" Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.556372 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.557288 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6067d6a1-495e-4980-8189-ccdc43df1a37-client-ca\") pod \"6067d6a1-495e-4980-8189-ccdc43df1a37\" (UID: \"6067d6a1-495e-4980-8189-ccdc43df1a37\") " Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.557330 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wsqh\" (UniqueName: \"kubernetes.io/projected/6067d6a1-495e-4980-8189-ccdc43df1a37-kube-api-access-5wsqh\") pod \"6067d6a1-495e-4980-8189-ccdc43df1a37\" (UID: \"6067d6a1-495e-4980-8189-ccdc43df1a37\") " Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.557379 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6067d6a1-495e-4980-8189-ccdc43df1a37-serving-cert\") pod \"6067d6a1-495e-4980-8189-ccdc43df1a37\" (UID: \"6067d6a1-495e-4980-8189-ccdc43df1a37\") " Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.557407 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6067d6a1-495e-4980-8189-ccdc43df1a37-config\") pod \"6067d6a1-495e-4980-8189-ccdc43df1a37\" (UID: \"6067d6a1-495e-4980-8189-ccdc43df1a37\") " Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.558707 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6067d6a1-495e-4980-8189-ccdc43df1a37-config" (OuterVolumeSpecName: "config") pod "6067d6a1-495e-4980-8189-ccdc43df1a37" (UID: "6067d6a1-495e-4980-8189-ccdc43df1a37"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.559157 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6067d6a1-495e-4980-8189-ccdc43df1a37-client-ca" (OuterVolumeSpecName: "client-ca") pod "6067d6a1-495e-4980-8189-ccdc43df1a37" (UID: "6067d6a1-495e-4980-8189-ccdc43df1a37"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.592972 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6067d6a1-495e-4980-8189-ccdc43df1a37-kube-api-access-5wsqh" (OuterVolumeSpecName: "kube-api-access-5wsqh") pod "6067d6a1-495e-4980-8189-ccdc43df1a37" (UID: "6067d6a1-495e-4980-8189-ccdc43df1a37"). InnerVolumeSpecName "kube-api-access-5wsqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.597028 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6067d6a1-495e-4980-8189-ccdc43df1a37-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6067d6a1-495e-4980-8189-ccdc43df1a37" (UID: "6067d6a1-495e-4980-8189-ccdc43df1a37"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.613863 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p"] Jan 27 12:16:08 crc kubenswrapper[4745]: E0127 12:16:08.614158 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6067d6a1-495e-4980-8189-ccdc43df1a37" containerName="route-controller-manager" Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.614173 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="6067d6a1-495e-4980-8189-ccdc43df1a37" containerName="route-controller-manager" Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.614291 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="6067d6a1-495e-4980-8189-ccdc43df1a37" containerName="route-controller-manager" Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.614746 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p" Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.632356 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p"] Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.659771 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6067d6a1-495e-4980-8189-ccdc43df1a37-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.659904 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wsqh\" (UniqueName: \"kubernetes.io/projected/6067d6a1-495e-4980-8189-ccdc43df1a37-kube-api-access-5wsqh\") on node \"crc\" DevicePath \"\"" Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.659958 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6067d6a1-495e-4980-8189-ccdc43df1a37-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.660006 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6067d6a1-495e-4980-8189-ccdc43df1a37-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.740764 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491935-z6lc8"] Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.761639 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjvrd\" (UniqueName: \"kubernetes.io/projected/0fecdc41-ce07-4598-8b9b-92ef20b9cbf9-kube-api-access-vjvrd\") pod \"route-controller-manager-69d7f49f5-mfq2p\" (UID: \"0fecdc41-ce07-4598-8b9b-92ef20b9cbf9\") " pod="openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p" Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.761708 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fecdc41-ce07-4598-8b9b-92ef20b9cbf9-serving-cert\") pod \"route-controller-manager-69d7f49f5-mfq2p\" (UID: \"0fecdc41-ce07-4598-8b9b-92ef20b9cbf9\") " pod="openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p" Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.761761 4745 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fecdc41-ce07-4598-8b9b-92ef20b9cbf9-config\") pod \"route-controller-manager-69d7f49f5-mfq2p\" (UID: \"0fecdc41-ce07-4598-8b9b-92ef20b9cbf9\") " pod="openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p" Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.761791 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0fecdc41-ce07-4598-8b9b-92ef20b9cbf9-client-ca\") pod \"route-controller-manager-69d7f49f5-mfq2p\" (UID: \"0fecdc41-ce07-4598-8b9b-92ef20b9cbf9\") " pod="openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p" Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.862710 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjvrd\" (UniqueName: \"kubernetes.io/projected/0fecdc41-ce07-4598-8b9b-92ef20b9cbf9-kube-api-access-vjvrd\") pod \"route-controller-manager-69d7f49f5-mfq2p\" (UID: \"0fecdc41-ce07-4598-8b9b-92ef20b9cbf9\") " pod="openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p" Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.862768 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fecdc41-ce07-4598-8b9b-92ef20b9cbf9-serving-cert\") pod \"route-controller-manager-69d7f49f5-mfq2p\" (UID: \"0fecdc41-ce07-4598-8b9b-92ef20b9cbf9\") " pod="openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p" Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.862836 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fecdc41-ce07-4598-8b9b-92ef20b9cbf9-config\") pod \"route-controller-manager-69d7f49f5-mfq2p\" (UID: \"0fecdc41-ce07-4598-8b9b-92ef20b9cbf9\") " pod="openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p" Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.862871 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0fecdc41-ce07-4598-8b9b-92ef20b9cbf9-client-ca\") pod \"route-controller-manager-69d7f49f5-mfq2p\" (UID: \"0fecdc41-ce07-4598-8b9b-92ef20b9cbf9\") " pod="openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p" Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.863931 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0fecdc41-ce07-4598-8b9b-92ef20b9cbf9-client-ca\") pod \"route-controller-manager-69d7f49f5-mfq2p\" (UID: \"0fecdc41-ce07-4598-8b9b-92ef20b9cbf9\") " pod="openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p" Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.864063 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fecdc41-ce07-4598-8b9b-92ef20b9cbf9-config\") pod \"route-controller-manager-69d7f49f5-mfq2p\" (UID: \"0fecdc41-ce07-4598-8b9b-92ef20b9cbf9\") " pod="openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p" Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.881429 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0fecdc41-ce07-4598-8b9b-92ef20b9cbf9-serving-cert\") pod \"route-controller-manager-69d7f49f5-mfq2p\" (UID: \"0fecdc41-ce07-4598-8b9b-92ef20b9cbf9\") " pod="openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p" Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.883340 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjvrd\" (UniqueName: \"kubernetes.io/projected/0fecdc41-ce07-4598-8b9b-92ef20b9cbf9-kube-api-access-vjvrd\") pod \"route-controller-manager-69d7f49f5-mfq2p\" (UID: \"0fecdc41-ce07-4598-8b9b-92ef20b9cbf9\") " pod="openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p" Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.991850 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 12:16:08 crc kubenswrapper[4745]: I0127 12:16:08.995309 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p" Jan 27 12:16:08 crc kubenswrapper[4745]: W0127 12:16:08.998413 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb2dce2e2_534d_417d_a4f7_d945631a53b4.slice/crio-5bbe6cead109d6aafd4aaf51ba06d593b27061868d12ce039283ca63dbea8405 WatchSource:0}: Error finding container 5bbe6cead109d6aafd4aaf51ba06d593b27061868d12ce039283ca63dbea8405: Status 404 returned error can't find the container with id 5bbe6cead109d6aafd4aaf51ba06d593b27061868d12ce039283ca63dbea8405 Jan 27 12:16:09 crc kubenswrapper[4745]: I0127 12:16:09.004090 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66df77f6dd-8xdt6"] Jan 27 12:16:09 crc kubenswrapper[4745]: I0127 12:16:09.008553 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 12:16:09 crc kubenswrapper[4745]: I0127 12:16:09.203696 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" event={"ID":"6067d6a1-495e-4980-8189-ccdc43df1a37","Type":"ContainerDied","Data":"e4060e9e94cd30f5b52cbd5d39cb28c4f22eb026846b8da47553e8a84e5098f1"} Jan 27 12:16:09 crc kubenswrapper[4745]: I0127 12:16:09.204127 4745 scope.go:117] "RemoveContainer" containerID="1f583b0a83e7790c196d2e73af9102881bb3f3d6a420889e1bc360384e322004" Jan 27 12:16:09 crc kubenswrapper[4745]: I0127 12:16:09.203955 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8" Jan 27 12:16:09 crc kubenswrapper[4745]: I0127 12:16:09.206505 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66df77f6dd-8xdt6" event={"ID":"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7","Type":"ContainerStarted","Data":"3bd6842fc96bd42fd11ef26f4cff151a116e2e813499a4258e7911f3d51b7750"} Jan 27 12:16:09 crc kubenswrapper[4745]: I0127 12:16:09.211244 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d1170ea3-dafb-4e36-b873-bc1e339b86b9","Type":"ContainerStarted","Data":"7463332bf0580a8be6b9560f125ff32c844b92974af4f23c6183017031935b88"} Jan 27 12:16:09 crc kubenswrapper[4745]: I0127 12:16:09.215176 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hbsbc" event={"ID":"478908d6-765e-4bd8-a3ef-3142a7641a3b","Type":"ContainerStarted","Data":"15d98d596480ad3dbdafb8cbb62a0ef4c2f0f9bec3a32fa2d982ae0eca76764c"} Jan 27 12:16:09 crc kubenswrapper[4745]: I0127 12:16:09.220717 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p"] Jan 27 12:16:09 crc kubenswrapper[4745]: I0127 12:16:09.226039 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b2dce2e2-534d-417d-a4f7-d945631a53b4","Type":"ContainerStarted","Data":"5bbe6cead109d6aafd4aaf51ba06d593b27061868d12ce039283ca63dbea8405"} Jan 27 12:16:09 crc kubenswrapper[4745]: I0127 12:16:09.237762 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8"] Jan 27 12:16:09 crc kubenswrapper[4745]: I0127 12:16:09.241655 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491935-z6lc8" event={"ID":"fe4ab457-bf86-43e0-898e-d7d1b5965142","Type":"ContainerStarted","Data":"5a6c3bf3ab4492e900d7eb844f7cd375caab681a94573ae3c7f9113d7cbcff5e"} Jan 27 12:16:09 crc kubenswrapper[4745]: I0127 12:16:09.241732 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-548d5b954d-pxxs8"] Jan 27 12:16:09 crc kubenswrapper[4745]: E0127 12:16:09.242500 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9tkgm" podUID="7ff89667-3b76-4571-a07b-d43bce0a2e5b" Jan 27 12:16:10 crc kubenswrapper[4745]: I0127 12:16:10.081468 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6067d6a1-495e-4980-8189-ccdc43df1a37" path="/var/lib/kubelet/pods/6067d6a1-495e-4980-8189-ccdc43df1a37/volumes" Jan 27 12:16:10 crc kubenswrapper[4745]: I0127 12:16:10.246218 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491935-z6lc8" event={"ID":"fe4ab457-bf86-43e0-898e-d7d1b5965142","Type":"ContainerStarted","Data":"98de81cd3cf99a456d6f032e8b1e10e974d3d59178e978e3724c124977df0e76"} Jan 27 12:16:10 crc kubenswrapper[4745]: I0127 12:16:10.249155 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66df77f6dd-8xdt6" 
event={"ID":"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7","Type":"ContainerStarted","Data":"f376bbcd622cc7b4208bbe8df375c214f4668876ba745a56f69cbeb55a2489f5"} Jan 27 12:16:10 crc kubenswrapper[4745]: I0127 12:16:10.249427 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-66df77f6dd-8xdt6" Jan 27 12:16:10 crc kubenswrapper[4745]: I0127 12:16:10.251537 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d1170ea3-dafb-4e36-b873-bc1e339b86b9","Type":"ContainerStarted","Data":"a722ac0c6f8264347fa9e601fd4dd66fafd6e6524d379e7416467edc2b15d0ba"} Jan 27 12:16:10 crc kubenswrapper[4745]: I0127 12:16:10.253505 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p" event={"ID":"0fecdc41-ce07-4598-8b9b-92ef20b9cbf9","Type":"ContainerStarted","Data":"2bc895b11b266e2683da00535afeea4f601fe625bea42c0fb7443934f74f12ab"} Jan 27 12:16:10 crc kubenswrapper[4745]: I0127 12:16:10.253554 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p" event={"ID":"0fecdc41-ce07-4598-8b9b-92ef20b9cbf9","Type":"ContainerStarted","Data":"39671989f37ebe7c0551a560fecca8e8042ca0c476446925538d5de1b06aaae7"} Jan 27 12:16:10 crc kubenswrapper[4745]: I0127 12:16:10.253698 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p" Jan 27 12:16:10 crc kubenswrapper[4745]: I0127 12:16:10.254167 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-66df77f6dd-8xdt6" Jan 27 12:16:10 crc kubenswrapper[4745]: I0127 12:16:10.257485 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerStarted","Data":"99d92afaff4d5da46033fc226ce0aba0f0ba990de6f690349b869b38b7d1aea9"} Jan 27 12:16:10 crc kubenswrapper[4745]: I0127 12:16:10.260082 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b2dce2e2-534d-417d-a4f7-d945631a53b4","Type":"ContainerStarted","Data":"698be32144fc02102c3762b2d89307557cbf34c35b7af36fa8331ca5ff0d5ab1"} Jan 27 12:16:10 crc kubenswrapper[4745]: I0127 12:16:10.260147 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-hbsbc" Jan 27 12:16:10 crc kubenswrapper[4745]: I0127 12:16:10.260652 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbsbc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 27 12:16:10 crc kubenswrapper[4745]: I0127 12:16:10.260694 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 27 12:16:10 crc kubenswrapper[4745]: I0127 12:16:10.267109 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29491935-z6lc8" 
podStartSLOduration=70.267091639 podStartE2EDuration="1m10.267091639s" podCreationTimestamp="2026-01-27 12:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:16:10.26615906 +0000 UTC m=+263.071069748" watchObservedRunningTime="2026-01-27 12:16:10.267091639 +0000 UTC m=+263.072002327" Jan 27 12:16:10 crc kubenswrapper[4745]: I0127 12:16:10.301722 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p" Jan 27 12:16:10 crc kubenswrapper[4745]: I0127 12:16:10.308403 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p" podStartSLOduration=74.308384818 podStartE2EDuration="1m14.308384818s" podCreationTimestamp="2026-01-27 12:14:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:16:10.308090882 +0000 UTC m=+263.113001570" watchObservedRunningTime="2026-01-27 12:16:10.308384818 +0000 UTC m=+263.113295506" Jan 27 12:16:10 crc kubenswrapper[4745]: I0127 12:16:10.322231 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=53.322212736 podStartE2EDuration="53.322212736s" podCreationTimestamp="2026-01-27 12:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:16:10.322142114 +0000 UTC m=+263.127052812" watchObservedRunningTime="2026-01-27 12:16:10.322212736 +0000 UTC m=+263.127123424" Jan 27 12:16:10 crc kubenswrapper[4745]: I0127 12:16:10.342696 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-66df77f6dd-8xdt6" podStartSLOduration=74.342679337 podStartE2EDuration="1m14.342679337s" podCreationTimestamp="2026-01-27 12:14:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:16:10.342219917 +0000 UTC m=+263.147130615" watchObservedRunningTime="2026-01-27 12:16:10.342679337 +0000 UTC m=+263.147590025" Jan 27 12:16:10 crc kubenswrapper[4745]: I0127 12:16:10.364721 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=49.364700119 podStartE2EDuration="49.364700119s" podCreationTimestamp="2026-01-27 12:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:16:10.361945513 +0000 UTC m=+263.166856201" watchObservedRunningTime="2026-01-27 12:16:10.364700119 +0000 UTC m=+263.169610817" Jan 27 12:16:10 crc kubenswrapper[4745]: E0127 12:16:10.511350 4745 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe4ab457_bf86_43e0_898e_d7d1b5965142.slice/crio-conmon-98de81cd3cf99a456d6f032e8b1e10e974d3d59178e978e3724c124977df0e76.scope\": RecentStats: unable to find data in memory cache]" Jan 27 12:16:11 crc kubenswrapper[4745]: I0127 12:16:11.266797 4745 generic.go:334] "Generic (PLEG): container finished" podID="fe4ab457-bf86-43e0-898e-d7d1b5965142" 
containerID="98de81cd3cf99a456d6f032e8b1e10e974d3d59178e978e3724c124977df0e76" exitCode=0 Jan 27 12:16:11 crc kubenswrapper[4745]: I0127 12:16:11.267068 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491935-z6lc8" event={"ID":"fe4ab457-bf86-43e0-898e-d7d1b5965142","Type":"ContainerDied","Data":"98de81cd3cf99a456d6f032e8b1e10e974d3d59178e978e3724c124977df0e76"} Jan 27 12:16:11 crc kubenswrapper[4745]: I0127 12:16:11.267799 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbsbc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 27 12:16:11 crc kubenswrapper[4745]: I0127 12:16:11.267886 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 27 12:16:12 crc kubenswrapper[4745]: I0127 12:16:12.274506 4745 generic.go:334] "Generic (PLEG): container finished" podID="b2dce2e2-534d-417d-a4f7-d945631a53b4" containerID="698be32144fc02102c3762b2d89307557cbf34c35b7af36fa8331ca5ff0d5ab1" exitCode=0 Jan 27 12:16:12 crc kubenswrapper[4745]: I0127 12:16:12.274672 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b2dce2e2-534d-417d-a4f7-d945631a53b4","Type":"ContainerDied","Data":"698be32144fc02102c3762b2d89307557cbf34c35b7af36fa8331ca5ff0d5ab1"} Jan 27 12:16:16 crc kubenswrapper[4745]: I0127 12:16:16.520951 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbsbc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 27 12:16:16 crc kubenswrapper[4745]: I0127 12:16:16.520966 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbsbc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 27 12:16:16 crc kubenswrapper[4745]: I0127 12:16:16.521477 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 27 12:16:16 crc kubenswrapper[4745]: I0127 12:16:16.521421 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hbsbc" podUID="478908d6-765e-4bd8-a3ef-3142a7641a3b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 27 12:16:26 crc kubenswrapper[4745]: I0127 12:16:26.533564 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-hbsbc" Jan 27 12:16:33 crc kubenswrapper[4745]: E0127 12:16:27.110768 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-2c9wm" podUID="3fcec544-9ef8-406d-9f01-b3ceabf2b033" Jan 27 12:16:33 crc kubenswrapper[4745]: E0127 12:16:27.120506 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-zhnbq" podUID="d2b41701-5113-4970-8d93-157bf16b3c06" Jan 27 12:16:33 crc kubenswrapper[4745]: I0127 12:16:27.177221 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491935-z6lc8" Jan 27 12:16:33 crc kubenswrapper[4745]: I0127 12:16:27.245334 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe4ab457-bf86-43e0-898e-d7d1b5965142-secret-volume\") pod \"fe4ab457-bf86-43e0-898e-d7d1b5965142\" (UID: \"fe4ab457-bf86-43e0-898e-d7d1b5965142\") " Jan 27 12:16:33 crc kubenswrapper[4745]: I0127 12:16:27.245406 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccxfm\" (UniqueName: \"kubernetes.io/projected/fe4ab457-bf86-43e0-898e-d7d1b5965142-kube-api-access-ccxfm\") pod \"fe4ab457-bf86-43e0-898e-d7d1b5965142\" (UID: \"fe4ab457-bf86-43e0-898e-d7d1b5965142\") " Jan 27 12:16:33 crc kubenswrapper[4745]: I0127 12:16:27.245458 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe4ab457-bf86-43e0-898e-d7d1b5965142-config-volume\") pod \"fe4ab457-bf86-43e0-898e-d7d1b5965142\" (UID: \"fe4ab457-bf86-43e0-898e-d7d1b5965142\") " Jan 27 12:16:33 crc kubenswrapper[4745]: I0127 12:16:27.246400 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe4ab457-bf86-43e0-898e-d7d1b5965142-config-volume" (OuterVolumeSpecName: "config-volume") pod "fe4ab457-bf86-43e0-898e-d7d1b5965142" (UID: "fe4ab457-bf86-43e0-898e-d7d1b5965142"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:16:33 crc kubenswrapper[4745]: I0127 12:16:27.251071 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe4ab457-bf86-43e0-898e-d7d1b5965142-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fe4ab457-bf86-43e0-898e-d7d1b5965142" (UID: "fe4ab457-bf86-43e0-898e-d7d1b5965142"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:16:33 crc kubenswrapper[4745]: I0127 12:16:27.251758 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe4ab457-bf86-43e0-898e-d7d1b5965142-kube-api-access-ccxfm" (OuterVolumeSpecName: "kube-api-access-ccxfm") pod "fe4ab457-bf86-43e0-898e-d7d1b5965142" (UID: "fe4ab457-bf86-43e0-898e-d7d1b5965142"). InnerVolumeSpecName "kube-api-access-ccxfm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:16:33 crc kubenswrapper[4745]: I0127 12:16:27.346832 4745 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe4ab457-bf86-43e0-898e-d7d1b5965142-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 12:16:33 crc kubenswrapper[4745]: I0127 12:16:27.346882 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccxfm\" (UniqueName: \"kubernetes.io/projected/fe4ab457-bf86-43e0-898e-d7d1b5965142-kube-api-access-ccxfm\") on node \"crc\" DevicePath \"\"" Jan 27 12:16:33 crc kubenswrapper[4745]: I0127 12:16:27.346896 4745 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe4ab457-bf86-43e0-898e-d7d1b5965142-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 12:16:33 crc kubenswrapper[4745]: I0127 12:16:27.372046 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491935-z6lc8" event={"ID":"fe4ab457-bf86-43e0-898e-d7d1b5965142","Type":"ContainerDied","Data":"5a6c3bf3ab4492e900d7eb844f7cd375caab681a94573ae3c7f9113d7cbcff5e"} Jan 27 12:16:33 crc kubenswrapper[4745]: I0127 12:16:27.372093 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a6c3bf3ab4492e900d7eb844f7cd375caab681a94573ae3c7f9113d7cbcff5e" Jan 27 12:16:33 crc kubenswrapper[4745]: I0127 12:16:27.372099 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491935-z6lc8" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.581438 4745 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 12:16:47 crc kubenswrapper[4745]: E0127 12:16:47.582079 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe4ab457-bf86-43e0-898e-d7d1b5965142" containerName="collect-profiles" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.582091 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe4ab457-bf86-43e0-898e-d7d1b5965142" containerName="collect-profiles" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.582188 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe4ab457-bf86-43e0-898e-d7d1b5965142" containerName="collect-profiles" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.582517 4745 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.582760 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466" gracePeriod=15 Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.582897 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b" gracePeriod=15 Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.582885 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://0a075761afe1cb61a52a5a40380f0fd9eef24c6f1edc5e9cb4c1a70a6039fdce" gracePeriod=15 Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.582945 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19" gracePeriod=15 Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.582935 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657" gracePeriod=15 Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.582798 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.584267 4745 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 12:16:47 crc kubenswrapper[4745]: E0127 12:16:47.584646 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.584667 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 12:16:47 crc kubenswrapper[4745]: E0127 12:16:47.584679 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.584691 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 12:16:47 crc kubenswrapper[4745]: E0127 12:16:47.584707 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.584720 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 12:16:47 crc kubenswrapper[4745]: E0127 12:16:47.584736 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.584748 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 12:16:47 crc kubenswrapper[4745]: E0127 12:16:47.584764 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.584775 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 12:16:47 crc kubenswrapper[4745]: E0127 12:16:47.584789 4745 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.584800 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 12:16:47 crc kubenswrapper[4745]: E0127 12:16:47.584835 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.584846 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.585011 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.585031 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.585047 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.585060 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.585074 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.585086 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.585105 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 12:16:47 crc kubenswrapper[4745]: E0127 12:16:47.585268 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.585281 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.623126 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.623611 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.623662 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.623783 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.623837 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.623900 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.623973 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.624022 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.725750 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.725795 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.725879 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.725922 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.725943 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.725969 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.725996 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.726022 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.726108 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.726153 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.726182 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.726209 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.726237 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.726264 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.726291 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.726316 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:16:47 crc kubenswrapper[4745]: I0127 12:16:47.824231 4745 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 27 12:16:48 crc kubenswrapper[4745]: I0127 12:16:48.080989 4745 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:16:48 crc kubenswrapper[4745]: I0127 12:16:48.081314 4745 status_manager.go:851] "Failed to get status for pod" podUID="b2dce2e2-534d-417d-a4f7-d945631a53b4" pod="openshift-kube-apiserver/revision-pruner-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:16:48 crc kubenswrapper[4745]: I0127 12:16:48.501927 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 12:16:48 crc kubenswrapper[4745]: I0127 12:16:48.504170 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 12:16:48 crc kubenswrapper[4745]: I0127 12:16:48.505285 4745 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0a075761afe1cb61a52a5a40380f0fd9eef24c6f1edc5e9cb4c1a70a6039fdce" exitCode=0 Jan 27 12:16:48 crc kubenswrapper[4745]: I0127 12:16:48.505321 4745 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b" exitCode=0 Jan 27 12:16:48 crc kubenswrapper[4745]: I0127 12:16:48.505334 4745 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657" exitCode=0 Jan 27 
12:16:48 crc kubenswrapper[4745]: I0127 12:16:48.505345 4745 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19" exitCode=2 Jan 27 12:16:48 crc kubenswrapper[4745]: I0127 12:16:48.505386 4745 scope.go:117] "RemoveContainer" containerID="4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e" Jan 27 12:16:48 crc kubenswrapper[4745]: I0127 12:16:48.508023 4745 generic.go:334] "Generic (PLEG): container finished" podID="d1170ea3-dafb-4e36-b873-bc1e339b86b9" containerID="a722ac0c6f8264347fa9e601fd4dd66fafd6e6524d379e7416467edc2b15d0ba" exitCode=0 Jan 27 12:16:48 crc kubenswrapper[4745]: I0127 12:16:48.508068 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d1170ea3-dafb-4e36-b873-bc1e339b86b9","Type":"ContainerDied","Data":"a722ac0c6f8264347fa9e601fd4dd66fafd6e6524d379e7416467edc2b15d0ba"} Jan 27 12:16:48 crc kubenswrapper[4745]: I0127 12:16:48.508881 4745 status_manager.go:851] "Failed to get status for pod" podUID="b2dce2e2-534d-417d-a4f7-d945631a53b4" pod="openshift-kube-apiserver/revision-pruner-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:16:48 crc kubenswrapper[4745]: I0127 12:16:48.509315 4745 status_manager.go:851] "Failed to get status for pod" podUID="d1170ea3-dafb-4e36-b873-bc1e339b86b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:16:50 crc kubenswrapper[4745]: I0127 12:16:50.524119 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 12:16:50 crc kubenswrapper[4745]: I0127 12:16:50.525290 4745 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466" exitCode=0 Jan 27 12:16:52 crc kubenswrapper[4745]: E0127 12:16:52.221312 4745 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events/redhat-operators-2c9wm.188e9580b37141fd\": dial tcp 38.129.56.233:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-2c9wm.188e9580b37141fd openshift-marketplace 29494 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-2c9wm,UID:3fcec544-9ef8-406d-9f01-b3ceabf2b033,APIVersion:v1,ResourceVersion:28504,FieldPath:spec.initContainers{extract-content},},Reason:Pulling,Message:Pulling image \"registry.redhat.io/redhat/redhat-operator-index:v4.18\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 12:14:43 +0000 UTC,LastTimestamp:2026-01-27 12:16:52.218408675 +0000 UTC m=+305.023319393,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 12:16:52 crc kubenswrapper[4745]: I0127 12:16:52.259618 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 12:16:52 crc kubenswrapper[4745]: I0127 12:16:52.260304 4745 status_manager.go:851] "Failed to get status for pod" podUID="b2dce2e2-534d-417d-a4f7-d945631a53b4" pod="openshift-kube-apiserver/revision-pruner-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:16:52 crc kubenswrapper[4745]: I0127 12:16:52.260695 4745 status_manager.go:851] "Failed to get status for pod" podUID="d1170ea3-dafb-4e36-b873-bc1e339b86b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:16:52 crc kubenswrapper[4745]: I0127 12:16:52.315413 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2dce2e2-534d-417d-a4f7-d945631a53b4-kube-api-access\") pod \"b2dce2e2-534d-417d-a4f7-d945631a53b4\" (UID: \"b2dce2e2-534d-417d-a4f7-d945631a53b4\") " Jan 27 12:16:52 crc kubenswrapper[4745]: I0127 12:16:52.315468 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b2dce2e2-534d-417d-a4f7-d945631a53b4-kubelet-dir\") pod \"b2dce2e2-534d-417d-a4f7-d945631a53b4\" (UID: \"b2dce2e2-534d-417d-a4f7-d945631a53b4\") " Jan 27 12:16:52 crc kubenswrapper[4745]: I0127 12:16:52.315992 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2dce2e2-534d-417d-a4f7-d945631a53b4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b2dce2e2-534d-417d-a4f7-d945631a53b4" (UID: "b2dce2e2-534d-417d-a4f7-d945631a53b4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 12:16:52 crc kubenswrapper[4745]: I0127 12:16:52.323245 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2dce2e2-534d-417d-a4f7-d945631a53b4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b2dce2e2-534d-417d-a4f7-d945631a53b4" (UID: "b2dce2e2-534d-417d-a4f7-d945631a53b4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:16:52 crc kubenswrapper[4745]: I0127 12:16:52.416970 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2dce2e2-534d-417d-a4f7-d945631a53b4-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 12:16:52 crc kubenswrapper[4745]: I0127 12:16:52.417007 4745 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b2dce2e2-534d-417d-a4f7-d945631a53b4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 12:16:52 crc kubenswrapper[4745]: I0127 12:16:52.539019 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b2dce2e2-534d-417d-a4f7-d945631a53b4","Type":"ContainerDied","Data":"5bbe6cead109d6aafd4aaf51ba06d593b27061868d12ce039283ca63dbea8405"} Jan 27 12:16:52 crc kubenswrapper[4745]: I0127 12:16:52.539078 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bbe6cead109d6aafd4aaf51ba06d593b27061868d12ce039283ca63dbea8405" Jan 27 12:16:52 crc kubenswrapper[4745]: I0127 12:16:52.539094 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 12:16:52 crc kubenswrapper[4745]: I0127 12:16:52.564386 4745 status_manager.go:851] "Failed to get status for pod" podUID="b2dce2e2-534d-417d-a4f7-d945631a53b4" pod="openshift-kube-apiserver/revision-pruner-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:16:52 crc kubenswrapper[4745]: I0127 12:16:52.566554 4745 status_manager.go:851] "Failed to get status for pod" podUID="d1170ea3-dafb-4e36-b873-bc1e339b86b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:16:52 crc kubenswrapper[4745]: I0127 12:16:52.569855 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 12:16:52 crc kubenswrapper[4745]: I0127 12:16:52.570477 4745 status_manager.go:851] "Failed to get status for pod" podUID="b2dce2e2-534d-417d-a4f7-d945631a53b4" pod="openshift-kube-apiserver/revision-pruner-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:16:52 crc kubenswrapper[4745]: I0127 12:16:52.570975 4745 status_manager.go:851] "Failed to get status for pod" podUID="d1170ea3-dafb-4e36-b873-bc1e339b86b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:16:52 crc kubenswrapper[4745]: I0127 12:16:52.624443 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1170ea3-dafb-4e36-b873-bc1e339b86b9-kubelet-dir\") pod \"d1170ea3-dafb-4e36-b873-bc1e339b86b9\" (UID: \"d1170ea3-dafb-4e36-b873-bc1e339b86b9\") " Jan 27 12:16:52 crc kubenswrapper[4745]: I0127 12:16:52.624530 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1170ea3-dafb-4e36-b873-bc1e339b86b9-kube-api-access\") pod \"d1170ea3-dafb-4e36-b873-bc1e339b86b9\" (UID: \"d1170ea3-dafb-4e36-b873-bc1e339b86b9\") " Jan 27 12:16:52 crc kubenswrapper[4745]: I0127 12:16:52.624595 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d1170ea3-dafb-4e36-b873-bc1e339b86b9-var-lock\") pod \"d1170ea3-dafb-4e36-b873-bc1e339b86b9\" (UID: \"d1170ea3-dafb-4e36-b873-bc1e339b86b9\") " Jan 27 12:16:52 crc kubenswrapper[4745]: I0127 12:16:52.624848 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1170ea3-dafb-4e36-b873-bc1e339b86b9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d1170ea3-dafb-4e36-b873-bc1e339b86b9" (UID: "d1170ea3-dafb-4e36-b873-bc1e339b86b9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 12:16:52 crc kubenswrapper[4745]: I0127 12:16:52.624930 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1170ea3-dafb-4e36-b873-bc1e339b86b9-var-lock" (OuterVolumeSpecName: "var-lock") pod "d1170ea3-dafb-4e36-b873-bc1e339b86b9" (UID: "d1170ea3-dafb-4e36-b873-bc1e339b86b9"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 12:16:52 crc kubenswrapper[4745]: I0127 12:16:52.628452 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1170ea3-dafb-4e36-b873-bc1e339b86b9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d1170ea3-dafb-4e36-b873-bc1e339b86b9" (UID: "d1170ea3-dafb-4e36-b873-bc1e339b86b9"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:16:52 crc kubenswrapper[4745]: E0127 12:16:52.635519 4745 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.233:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 12:16:52 crc kubenswrapper[4745]: I0127 12:16:52.636591 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 12:16:52 crc kubenswrapper[4745]: I0127 12:16:52.726310 4745 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1170ea3-dafb-4e36-b873-bc1e339b86b9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 12:16:52 crc kubenswrapper[4745]: I0127 12:16:52.726353 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1170ea3-dafb-4e36-b873-bc1e339b86b9-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 12:16:52 crc kubenswrapper[4745]: I0127 12:16:52.726366 4745 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d1170ea3-dafb-4e36-b873-bc1e339b86b9-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 12:16:53 crc kubenswrapper[4745]: I0127 12:16:53.548423 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d1170ea3-dafb-4e36-b873-bc1e339b86b9","Type":"ContainerDied","Data":"7463332bf0580a8be6b9560f125ff32c844b92974af4f23c6183017031935b88"} Jan 27 12:16:53 crc kubenswrapper[4745]: I0127 12:16:53.548749 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7463332bf0580a8be6b9560f125ff32c844b92974af4f23c6183017031935b88" Jan 27 12:16:53 crc kubenswrapper[4745]: I0127 12:16:53.548525 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 12:16:53 crc kubenswrapper[4745]: I0127 12:16:53.560281 4745 status_manager.go:851] "Failed to get status for pod" podUID="b2dce2e2-534d-417d-a4f7-d945631a53b4" pod="openshift-kube-apiserver/revision-pruner-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:16:53 crc kubenswrapper[4745]: I0127 12:16:53.560561 4745 status_manager.go:851] "Failed to get status for pod" podUID="d1170ea3-dafb-4e36-b873-bc1e339b86b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:16:54 crc kubenswrapper[4745]: I0127 12:16:54.380573 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 12:16:54 crc kubenswrapper[4745]: I0127 12:16:54.381368 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:16:54 crc kubenswrapper[4745]: I0127 12:16:54.382191 4745 status_manager.go:851] "Failed to get status for pod" podUID="b2dce2e2-534d-417d-a4f7-d945631a53b4" pod="openshift-kube-apiserver/revision-pruner-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:16:54 crc kubenswrapper[4745]: I0127 12:16:54.382545 4745 status_manager.go:851] "Failed to get status for pod" podUID="d1170ea3-dafb-4e36-b873-bc1e339b86b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:16:54 crc kubenswrapper[4745]: I0127 12:16:54.382950 4745 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:16:54 crc kubenswrapper[4745]: I0127 12:16:54.450260 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 12:16:54 crc kubenswrapper[4745]: I0127 12:16:54.450407 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 12:16:54 crc kubenswrapper[4745]: I0127 12:16:54.451185 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 12:16:54 crc kubenswrapper[4745]: I0127 12:16:54.451263 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 12:16:54 crc kubenswrapper[4745]: I0127 12:16:54.451301 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 12:16:54 crc kubenswrapper[4745]: I0127 12:16:54.451365 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 12:16:54 crc kubenswrapper[4745]: I0127 12:16:54.451656 4745 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 27 12:16:54 crc kubenswrapper[4745]: I0127 12:16:54.451674 4745 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 12:16:54 crc kubenswrapper[4745]: I0127 12:16:54.451683 4745 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 12:16:54 crc kubenswrapper[4745]: I0127 12:16:54.559053 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 12:16:54 crc kubenswrapper[4745]: I0127 12:16:54.560101 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:16:54 crc kubenswrapper[4745]: I0127 12:16:54.575032 4745 status_manager.go:851] "Failed to get status for pod" podUID="b2dce2e2-534d-417d-a4f7-d945631a53b4" pod="openshift-kube-apiserver/revision-pruner-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:16:54 crc kubenswrapper[4745]: I0127 12:16:54.575406 4745 status_manager.go:851] "Failed to get status for pod" podUID="d1170ea3-dafb-4e36-b873-bc1e339b86b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:16:54 crc kubenswrapper[4745]: I0127 12:16:54.575784 4745 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:16:56 crc kubenswrapper[4745]: I0127 12:16:56.083124 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 27 12:16:56 crc kubenswrapper[4745]: I0127 12:16:56.552010 4745 scope.go:117] "RemoveContainer" containerID="0a075761afe1cb61a52a5a40380f0fd9eef24c6f1edc5e9cb4c1a70a6039fdce" Jan 27 12:16:56 crc kubenswrapper[4745]: E0127 12:16:56.842418 4745 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:16:56 crc kubenswrapper[4745]: E0127 12:16:56.843104 4745 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:16:56 crc kubenswrapper[4745]: E0127 12:16:56.843883 4745 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:16:56 crc kubenswrapper[4745]: E0127 12:16:56.844899 4745 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:16:56 crc kubenswrapper[4745]: E0127 12:16:56.845355 4745 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:16:56 crc kubenswrapper[4745]: I0127 12:16:56.845425 4745 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 27 12:16:56 crc kubenswrapper[4745]: E0127 12:16:56.845722 4745 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.233:6443: connect: connection refused" interval="200ms" Jan 27 12:16:57 crc kubenswrapper[4745]: E0127 12:16:57.048311 4745 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.233:6443: connect: connection refused" interval="400ms" Jan 27 12:16:57 crc kubenswrapper[4745]: E0127 12:16:57.449750 4745 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.233:6443: connect: connection refused" interval="800ms" Jan 27 12:16:58 crc kubenswrapper[4745]: I0127 12:16:58.078073 4745 status_manager.go:851] "Failed to get status for pod" podUID="b2dce2e2-534d-417d-a4f7-d945631a53b4" pod="openshift-kube-apiserver/revision-pruner-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:16:58 crc kubenswrapper[4745]: I0127 12:16:58.078677 4745 status_manager.go:851] "Failed to get status for pod" podUID="d1170ea3-dafb-4e36-b873-bc1e339b86b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:16:58 crc kubenswrapper[4745]: E0127 12:16:58.250474 4745 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.233:6443: connect: connection refused" interval="1.6s" Jan 27 12:16:58 crc kubenswrapper[4745]: E0127 12:16:58.848620 4745 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events/redhat-operators-2c9wm.188e9580b37141fd\": dial tcp 38.129.56.233:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-2c9wm.188e9580b37141fd openshift-marketplace 29494 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-2c9wm,UID:3fcec544-9ef8-406d-9f01-b3ceabf2b033,APIVersion:v1,ResourceVersion:28504,FieldPath:spec.initContainers{extract-content},},Reason:Pulling,Message:Pulling image \"registry.redhat.io/redhat/redhat-operator-index:v4.18\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 12:14:43 +0000 UTC,LastTimestamp:2026-01-27 12:16:52.218408675 +0000 UTC m=+305.023319393,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 12:16:59 crc kubenswrapper[4745]: E0127 12:16:59.852183 4745 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.233:6443: connect: connection refused" interval="3.2s" Jan 27 12:17:01 crc kubenswrapper[4745]: I0127 12:17:01.073593 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:17:01 crc kubenswrapper[4745]: I0127 12:17:01.074971 4745 status_manager.go:851] "Failed to get status for pod" podUID="b2dce2e2-534d-417d-a4f7-d945631a53b4" pod="openshift-kube-apiserver/revision-pruner-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:17:01 crc kubenswrapper[4745]: I0127 12:17:01.075688 4745 status_manager.go:851] "Failed to get status for pod" podUID="d1170ea3-dafb-4e36-b873-bc1e339b86b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:17:01 crc kubenswrapper[4745]: I0127 12:17:01.097087 4745 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c4c7cda7-14d9-4e22-82b9-f36bda68c36a" Jan 27 12:17:01 crc kubenswrapper[4745]: I0127 12:17:01.097124 4745 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c4c7cda7-14d9-4e22-82b9-f36bda68c36a" Jan 27 12:17:01 crc kubenswrapper[4745]: E0127 12:17:01.097681 4745 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:17:01 crc kubenswrapper[4745]: I0127 12:17:01.098585 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:17:01 crc kubenswrapper[4745]: I0127 12:17:01.634078 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 12:17:01 crc kubenswrapper[4745]: I0127 12:17:01.634499 4745 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a" exitCode=1 Jan 27 12:17:01 crc kubenswrapper[4745]: I0127 12:17:01.634535 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a"} Jan 27 12:17:01 crc kubenswrapper[4745]: I0127 12:17:01.635189 4745 scope.go:117] "RemoveContainer" containerID="947b6f1abace8660f31f41e365970298544064522231fe2e576afb5dfd73822a" Jan 27 12:17:01 crc kubenswrapper[4745]: I0127 12:17:01.635776 4745 status_manager.go:851] "Failed to get status for pod" podUID="b2dce2e2-534d-417d-a4f7-d945631a53b4" pod="openshift-kube-apiserver/revision-pruner-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:17:01 crc kubenswrapper[4745]: I0127 12:17:01.636964 4745 status_manager.go:851] "Failed to get status for pod" podUID="d1170ea3-dafb-4e36-b873-bc1e339b86b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:17:01 crc kubenswrapper[4745]: I0127 12:17:01.637670 4745 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:17:03 crc kubenswrapper[4745]: E0127 12:17:03.053696 4745 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.233:6443: connect: connection refused" interval="6.4s" Jan 27 12:17:04 crc kubenswrapper[4745]: I0127 12:17:04.350096 4745 scope.go:117] "RemoveContainer" containerID="4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e" Jan 27 12:17:04 crc kubenswrapper[4745]: E0127 12:17:04.351162 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\": container with ID starting with 4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e not found: ID does not exist" containerID="4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e" Jan 27 12:17:04 crc kubenswrapper[4745]: I0127 12:17:04.351197 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e"} err="failed to get container status 
\"4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\": rpc error: code = NotFound desc = could not find container \"4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e\": container with ID starting with 4e6beef680970c8f9d1b9c36208d9db8b0696c658c59eb531ca4b85838a13e7e not found: ID does not exist" Jan 27 12:17:04 crc kubenswrapper[4745]: I0127 12:17:04.351219 4745 scope.go:117] "RemoveContainer" containerID="c4519800276cfa55a109f35d5edddbfa7835da7646d533af751e8c3d43972c2b" Jan 27 12:17:04 crc kubenswrapper[4745]: W0127 12:17:04.544515 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-434734a8b21b10abd7cb4d0ec1bd7293f73ff10786df6b9c15f5829447071786 WatchSource:0}: Error finding container 434734a8b21b10abd7cb4d0ec1bd7293f73ff10786df6b9c15f5829447071786: Status 404 returned error can't find the container with id 434734a8b21b10abd7cb4d0ec1bd7293f73ff10786df6b9c15f5829447071786 Jan 27 12:17:04 crc kubenswrapper[4745]: W0127 12:17:04.591475 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-b5cce0f343f3a4b36674a3d0d436348e07af166fa830b43912fbcd10dbd94a64 WatchSource:0}: Error finding container b5cce0f343f3a4b36674a3d0d436348e07af166fa830b43912fbcd10dbd94a64: Status 404 returned error can't find the container with id b5cce0f343f3a4b36674a3d0d436348e07af166fa830b43912fbcd10dbd94a64 Jan 27 12:17:04 crc kubenswrapper[4745]: I0127 12:17:04.642043 4745 scope.go:117] "RemoveContainer" containerID="9ffdffe3644f09c3864793fc426ac634e4e28f56b1bc1ff3c26b42520c56e657" Jan 27 12:17:04 crc kubenswrapper[4745]: I0127 12:17:04.650486 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b5cce0f343f3a4b36674a3d0d436348e07af166fa830b43912fbcd10dbd94a64"} Jan 27 12:17:04 crc kubenswrapper[4745]: I0127 12:17:04.654666 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"434734a8b21b10abd7cb4d0ec1bd7293f73ff10786df6b9c15f5829447071786"} Jan 27 12:17:04 crc kubenswrapper[4745]: I0127 12:17:04.728561 4745 scope.go:117] "RemoveContainer" containerID="ae5108126c9e46fedf696565aacddc4af615c42077cecc98bb584493a8efdd19" Jan 27 12:17:04 crc kubenswrapper[4745]: I0127 12:17:04.766008 4745 scope.go:117] "RemoveContainer" containerID="f8602b373431574787288e879f6ac52284dcb44eb175cdcd36b2e438dc1c5466" Jan 27 12:17:04 crc kubenswrapper[4745]: I0127 12:17:04.824452 4745 scope.go:117] "RemoveContainer" containerID="72c57651834e1a471e94d352b5dc64c1406d7ff2722faa2c157fa95b3a53ff5f" Jan 27 12:17:04 crc kubenswrapper[4745]: I0127 12:17:04.937086 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.667563 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"2aa387d05eb73f5ff660492280b771a6e6bf0d682704cf2c55afc23f19fa08fc"} Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.672241 4745 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c9wm" event={"ID":"3fcec544-9ef8-406d-9f01-b3ceabf2b033","Type":"ContainerStarted","Data":"63a54136d8fcb6b44e19c902c273c70c867dd941be0ee34492d3074359537ab4"} Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.674266 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9tkgm" event={"ID":"7ff89667-3b76-4571-a07b-d43bce0a2e5b","Type":"ContainerStarted","Data":"08efc721326ad7b1afe48e8eebdf9c75e8b377863f7f8e6ceb30bcd2332d42a9"} Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.675296 4745 status_manager.go:851] "Failed to get status for pod" podUID="7ff89667-3b76-4571-a07b-d43bce0a2e5b" pod="openshift-marketplace/redhat-operators-9tkgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9tkgm\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.675747 4745 status_manager.go:851] "Failed to get status for pod" podUID="b2dce2e2-534d-417d-a4f7-d945631a53b4" pod="openshift-kube-apiserver/revision-pruner-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.676228 4745 status_manager.go:851] "Failed to get status for pod" podUID="d1170ea3-dafb-4e36-b873-bc1e339b86b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.676515 4745 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.677587 4745 generic.go:334] "Generic (PLEG): container finished" podID="341a8942-834f-4f76-8269-7ecdecaaa1b0" containerID="1c1ca6841297b0075b5ad02fc7f84c079ae0dcbc97fbc61a6c7507b74306916c" exitCode=0 Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.677629 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzw6b" event={"ID":"341a8942-834f-4f76-8269-7ecdecaaa1b0","Type":"ContainerDied","Data":"1c1ca6841297b0075b5ad02fc7f84c079ae0dcbc97fbc61a6c7507b74306916c"} Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.678748 4745 status_manager.go:851] "Failed to get status for pod" podUID="7ff89667-3b76-4571-a07b-d43bce0a2e5b" pod="openshift-marketplace/redhat-operators-9tkgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9tkgm\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.679094 4745 status_manager.go:851] "Failed to get status for pod" podUID="b2dce2e2-534d-417d-a4f7-d945631a53b4" pod="openshift-kube-apiserver/revision-pruner-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:17:05 crc 
kubenswrapper[4745]: I0127 12:17:05.679303 4745 status_manager.go:851] "Failed to get status for pod" podUID="d1170ea3-dafb-4e36-b873-bc1e339b86b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.679580 4745 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.679757 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a694fbe9d52a2e69832c5349c91f5d57e2d8fa622cff24ee793e2fa1000eac8e"} Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.679870 4745 status_manager.go:851] "Failed to get status for pod" podUID="341a8942-834f-4f76-8269-7ecdecaaa1b0" pod="openshift-marketplace/community-operators-tzw6b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tzw6b\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.684485 4745 generic.go:334] "Generic (PLEG): container finished" podID="64c43381-42e2-4e01-9559-70c3c56070ea" containerID="4a2a69222a5f2ea8d3286144487e6f060f76c37177144f4d7b065471e07ec3ae" exitCode=0 Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.684551 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-255wr" event={"ID":"64c43381-42e2-4e01-9559-70c3c56070ea","Type":"ContainerDied","Data":"4a2a69222a5f2ea8d3286144487e6f060f76c37177144f4d7b065471e07ec3ae"} Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.685468 4745 status_manager.go:851] "Failed to get status for pod" podUID="7ff89667-3b76-4571-a07b-d43bce0a2e5b" pod="openshift-marketplace/redhat-operators-9tkgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9tkgm\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.686165 4745 status_manager.go:851] "Failed to get status for pod" podUID="b2dce2e2-534d-417d-a4f7-d945631a53b4" pod="openshift-kube-apiserver/revision-pruner-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.686448 4745 status_manager.go:851] "Failed to get status for pod" podUID="d1170ea3-dafb-4e36-b873-bc1e339b86b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.686720 4745 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.687050 4745 status_manager.go:851] "Failed to get status for pod" podUID="341a8942-834f-4f76-8269-7ecdecaaa1b0" pod="openshift-marketplace/community-operators-tzw6b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tzw6b\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.688165 4745 status_manager.go:851] "Failed to get status for pod" podUID="64c43381-42e2-4e01-9559-70c3c56070ea" pod="openshift-marketplace/certified-operators-255wr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-255wr\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.689874 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.690000 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5fb17901e33a8631d00af26f73f0cb8f4372550a7327f2836d9b1cbeecb37322"} Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.691460 4745 status_manager.go:851] "Failed to get status for pod" podUID="341a8942-834f-4f76-8269-7ecdecaaa1b0" pod="openshift-marketplace/community-operators-tzw6b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tzw6b\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.691775 4745 status_manager.go:851] "Failed to get status for pod" podUID="64c43381-42e2-4e01-9559-70c3c56070ea" pod="openshift-marketplace/certified-operators-255wr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-255wr\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.692006 4745 status_manager.go:851] "Failed to get status for pod" podUID="7ff89667-3b76-4571-a07b-d43bce0a2e5b" pod="openshift-marketplace/redhat-operators-9tkgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9tkgm\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.692168 4745 status_manager.go:851] "Failed to get status for pod" podUID="b2dce2e2-534d-417d-a4f7-d945631a53b4" pod="openshift-kube-apiserver/revision-pruner-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.692417 4745 status_manager.go:851] "Failed to get status for pod" podUID="d1170ea3-dafb-4e36-b873-bc1e339b86b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" Jan 27 12:17:05 crc 
kubenswrapper[4745]: I0127 12:17:05.692629 4745 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.694084 4745 generic.go:334] "Generic (PLEG): container finished" podID="36154dea-ca68-4ca6-8e2f-83a669152ca7" containerID="1c663205a9fc19b47eac1baa7e513608ebbc5725da351e4f4bdcef26baa21223" exitCode=0
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.694150 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzw5q" event={"ID":"36154dea-ca68-4ca6-8e2f-83a669152ca7","Type":"ContainerDied","Data":"1c663205a9fc19b47eac1baa7e513608ebbc5725da351e4f4bdcef26baa21223"}
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.695569 4745 status_manager.go:851] "Failed to get status for pod" podUID="7ff89667-3b76-4571-a07b-d43bce0a2e5b" pod="openshift-marketplace/redhat-operators-9tkgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9tkgm\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.695977 4745 status_manager.go:851] "Failed to get status for pod" podUID="b2dce2e2-534d-417d-a4f7-d945631a53b4" pod="openshift-kube-apiserver/revision-pruner-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.696474 4745 status_manager.go:851] "Failed to get status for pod" podUID="d1170ea3-dafb-4e36-b873-bc1e339b86b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.696734 4745 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.698279 4745 status_manager.go:851] "Failed to get status for pod" podUID="341a8942-834f-4f76-8269-7ecdecaaa1b0" pod="openshift-marketplace/community-operators-tzw6b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tzw6b\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.698608 4745 status_manager.go:851] "Failed to get status for pod" podUID="64c43381-42e2-4e01-9559-70c3c56070ea" pod="openshift-marketplace/certified-operators-255wr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-255wr\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.698890 4745 status_manager.go:851] "Failed to get status for pod" podUID="36154dea-ca68-4ca6-8e2f-83a669152ca7" pod="openshift-marketplace/redhat-marketplace-fzw5q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fzw5q\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.707548 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhnbq" event={"ID":"d2b41701-5113-4970-8d93-157bf16b3c06","Type":"ContainerStarted","Data":"c9845d62431f87b97a415efa7aa9aefa2b825cd0cc760c633bd9da5bfe028a63"}
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.716560 4745 generic.go:334] "Generic (PLEG): container finished" podID="7c6f4dda-1294-4903-a4c1-6685307c3b25" containerID="3bf365a7e2e65357db9a2fc4d87e2a66330a34323434f16c0c43668f5caa3e08" exitCode=0
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.716623 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmx2n" event={"ID":"7c6f4dda-1294-4903-a4c1-6685307c3b25","Type":"ContainerDied","Data":"3bf365a7e2e65357db9a2fc4d87e2a66330a34323434f16c0c43668f5caa3e08"}
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.717537 4745 status_manager.go:851] "Failed to get status for pod" podUID="b2dce2e2-534d-417d-a4f7-d945631a53b4" pod="openshift-kube-apiserver/revision-pruner-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.717935 4745 status_manager.go:851] "Failed to get status for pod" podUID="d1170ea3-dafb-4e36-b873-bc1e339b86b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.718211 4745 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.718405 4745 status_manager.go:851] "Failed to get status for pod" podUID="341a8942-834f-4f76-8269-7ecdecaaa1b0" pod="openshift-marketplace/community-operators-tzw6b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tzw6b\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.718557 4745 status_manager.go:851] "Failed to get status for pod" podUID="64c43381-42e2-4e01-9559-70c3c56070ea" pod="openshift-marketplace/certified-operators-255wr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-255wr\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.718710 4745 status_manager.go:851] "Failed to get status for pod" podUID="36154dea-ca68-4ca6-8e2f-83a669152ca7" pod="openshift-marketplace/redhat-marketplace-fzw5q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fzw5q\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.718867 4745 status_manager.go:851] "Failed to get status for pod" podUID="7c6f4dda-1294-4903-a4c1-6685307c3b25" pod="openshift-marketplace/certified-operators-bmx2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-bmx2n\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.719034 4745 status_manager.go:851] "Failed to get status for pod" podUID="7ff89667-3b76-4571-a07b-d43bce0a2e5b" pod="openshift-marketplace/redhat-operators-9tkgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9tkgm\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.723120 4745 generic.go:334] "Generic (PLEG): container finished" podID="6d114857-b077-4798-b578-b9a15645d31f" containerID="0f5d8dc9b636a5e3f071e28dc44f9c33f273e76715d4cb5c5008f733fcf569ee" exitCode=0
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.723165 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hw272" event={"ID":"6d114857-b077-4798-b578-b9a15645d31f","Type":"ContainerDied","Data":"0f5d8dc9b636a5e3f071e28dc44f9c33f273e76715d4cb5c5008f733fcf569ee"}
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.724000 4745 status_manager.go:851] "Failed to get status for pod" podUID="341a8942-834f-4f76-8269-7ecdecaaa1b0" pod="openshift-marketplace/community-operators-tzw6b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tzw6b\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.724399 4745 status_manager.go:851] "Failed to get status for pod" podUID="64c43381-42e2-4e01-9559-70c3c56070ea" pod="openshift-marketplace/certified-operators-255wr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-255wr\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.724799 4745 status_manager.go:851] "Failed to get status for pod" podUID="36154dea-ca68-4ca6-8e2f-83a669152ca7" pod="openshift-marketplace/redhat-marketplace-fzw5q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fzw5q\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.725114 4745 status_manager.go:851] "Failed to get status for pod" podUID="7c6f4dda-1294-4903-a4c1-6685307c3b25" pod="openshift-marketplace/certified-operators-bmx2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-bmx2n\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.725483 4745 status_manager.go:851] "Failed to get status for pod" podUID="7ff89667-3b76-4571-a07b-d43bce0a2e5b" pod="openshift-marketplace/redhat-operators-9tkgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9tkgm\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.725721 4745 status_manager.go:851] "Failed to get status for pod" podUID="b2dce2e2-534d-417d-a4f7-d945631a53b4" pod="openshift-kube-apiserver/revision-pruner-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.726017 4745 status_manager.go:851] "Failed to get status for pod" podUID="d1170ea3-dafb-4e36-b873-bc1e339b86b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.726247 4745 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:05 crc kubenswrapper[4745]: I0127 12:17:05.726515 4745 status_manager.go:851] "Failed to get status for pod" podUID="6d114857-b077-4798-b578-b9a15645d31f" pod="openshift-marketplace/redhat-marketplace-hw272" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hw272\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.732270 4745 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="a694fbe9d52a2e69832c5349c91f5d57e2d8fa622cff24ee793e2fa1000eac8e" exitCode=0
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.732415 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"a694fbe9d52a2e69832c5349c91f5d57e2d8fa622cff24ee793e2fa1000eac8e"}
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.732665 4745 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c4c7cda7-14d9-4e22-82b9-f36bda68c36a"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.732893 4745 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c4c7cda7-14d9-4e22-82b9-f36bda68c36a"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.733498 4745 status_manager.go:851] "Failed to get status for pod" podUID="64c43381-42e2-4e01-9559-70c3c56070ea" pod="openshift-marketplace/certified-operators-255wr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-255wr\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: E0127 12:17:06.733582 4745 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.233:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.734053 4745 status_manager.go:851] "Failed to get status for pod" podUID="36154dea-ca68-4ca6-8e2f-83a669152ca7" pod="openshift-marketplace/redhat-marketplace-fzw5q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fzw5q\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.734544 4745 status_manager.go:851] "Failed to get status for pod" podUID="7c6f4dda-1294-4903-a4c1-6685307c3b25" pod="openshift-marketplace/certified-operators-bmx2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-bmx2n\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.734975 4745 status_manager.go:851] "Failed to get status for pod" podUID="7ff89667-3b76-4571-a07b-d43bce0a2e5b" pod="openshift-marketplace/redhat-operators-9tkgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9tkgm\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.735467 4745 status_manager.go:851] "Failed to get status for pod" podUID="b2dce2e2-534d-417d-a4f7-d945631a53b4" pod="openshift-kube-apiserver/revision-pruner-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.735872 4745 status_manager.go:851] "Failed to get status for pod" podUID="d1170ea3-dafb-4e36-b873-bc1e339b86b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.736482 4745 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.736987 4745 status_manager.go:851] "Failed to get status for pod" podUID="6d114857-b077-4798-b578-b9a15645d31f" pod="openshift-marketplace/redhat-marketplace-hw272" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hw272\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.738031 4745 status_manager.go:851] "Failed to get status for pod" podUID="341a8942-834f-4f76-8269-7ecdecaaa1b0" pod="openshift-marketplace/community-operators-tzw6b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tzw6b\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.738702 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-255wr" event={"ID":"64c43381-42e2-4e01-9559-70c3c56070ea","Type":"ContainerStarted","Data":"5001560d8ba03714a647addf077fe97b8b9d85a7595a322d09b822b0ae7693b0"}
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.745924 4745 generic.go:334] "Generic (PLEG): container finished" podID="3fcec544-9ef8-406d-9f01-b3ceabf2b033" containerID="63a54136d8fcb6b44e19c902c273c70c867dd941be0ee34492d3074359537ab4" exitCode=0
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.746026 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c9wm" event={"ID":"3fcec544-9ef8-406d-9f01-b3ceabf2b033","Type":"ContainerDied","Data":"63a54136d8fcb6b44e19c902c273c70c867dd941be0ee34492d3074359537ab4"}
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.747157 4745 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.747911 4745 status_manager.go:851] "Failed to get status for pod" podUID="6d114857-b077-4798-b578-b9a15645d31f" pod="openshift-marketplace/redhat-marketplace-hw272" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hw272\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.748273 4745 status_manager.go:851] "Failed to get status for pod" podUID="341a8942-834f-4f76-8269-7ecdecaaa1b0" pod="openshift-marketplace/community-operators-tzw6b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tzw6b\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.748524 4745 generic.go:334] "Generic (PLEG): container finished" podID="d2b41701-5113-4970-8d93-157bf16b3c06" containerID="c9845d62431f87b97a415efa7aa9aefa2b825cd0cc760c633bd9da5bfe028a63" exitCode=0
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.748626 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhnbq" event={"ID":"d2b41701-5113-4970-8d93-157bf16b3c06","Type":"ContainerDied","Data":"c9845d62431f87b97a415efa7aa9aefa2b825cd0cc760c633bd9da5bfe028a63"}
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.748748 4745 status_manager.go:851] "Failed to get status for pod" podUID="64c43381-42e2-4e01-9559-70c3c56070ea" pod="openshift-marketplace/certified-operators-255wr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-255wr\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.749012 4745 status_manager.go:851] "Failed to get status for pod" podUID="36154dea-ca68-4ca6-8e2f-83a669152ca7" pod="openshift-marketplace/redhat-marketplace-fzw5q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fzw5q\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.749346 4745 status_manager.go:851] "Failed to get status for pod" podUID="7c6f4dda-1294-4903-a4c1-6685307c3b25" pod="openshift-marketplace/certified-operators-bmx2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-bmx2n\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.749631 4745 status_manager.go:851] "Failed to get status for pod" podUID="7ff89667-3b76-4571-a07b-d43bce0a2e5b" pod="openshift-marketplace/redhat-operators-9tkgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9tkgm\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.749909 4745 status_manager.go:851] "Failed to get status for pod" podUID="3fcec544-9ef8-406d-9f01-b3ceabf2b033" pod="openshift-marketplace/redhat-operators-2c9wm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2c9wm\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.750195 4745 status_manager.go:851] "Failed to get status for pod" podUID="b2dce2e2-534d-417d-a4f7-d945631a53b4" pod="openshift-kube-apiserver/revision-pruner-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.750468 4745 status_manager.go:851] "Failed to get status for pod" podUID="d1170ea3-dafb-4e36-b873-bc1e339b86b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.750891 4745 status_manager.go:851] "Failed to get status for pod" podUID="6d114857-b077-4798-b578-b9a15645d31f" pod="openshift-marketplace/redhat-marketplace-hw272" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hw272\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.751110 4745 generic.go:334] "Generic (PLEG): container finished" podID="7ff89667-3b76-4571-a07b-d43bce0a2e5b" containerID="08efc721326ad7b1afe48e8eebdf9c75e8b377863f7f8e6ceb30bcd2332d42a9" exitCode=0
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.751199 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9tkgm" event={"ID":"7ff89667-3b76-4571-a07b-d43bce0a2e5b","Type":"ContainerDied","Data":"08efc721326ad7b1afe48e8eebdf9c75e8b377863f7f8e6ceb30bcd2332d42a9"}
Jan 27 12:17:06 crc kubenswrapper[4745]: E0127 12:17:06.751560 4745 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.233:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.751703 4745 status_manager.go:851] "Failed to get status for pod" podUID="341a8942-834f-4f76-8269-7ecdecaaa1b0" pod="openshift-marketplace/community-operators-tzw6b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tzw6b\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.752223 4745 status_manager.go:851] "Failed to get status for pod" podUID="d2b41701-5113-4970-8d93-157bf16b3c06" pod="openshift-marketplace/community-operators-zhnbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-zhnbq\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.752480 4745 status_manager.go:851] "Failed to get status for pod" podUID="64c43381-42e2-4e01-9559-70c3c56070ea" pod="openshift-marketplace/certified-operators-255wr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-255wr\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.752855 4745 status_manager.go:851] "Failed to get status for pod" podUID="36154dea-ca68-4ca6-8e2f-83a669152ca7" pod="openshift-marketplace/redhat-marketplace-fzw5q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fzw5q\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.753364 4745 status_manager.go:851] "Failed to get status for pod" podUID="7c6f4dda-1294-4903-a4c1-6685307c3b25" pod="openshift-marketplace/certified-operators-bmx2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-bmx2n\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.753648 4745 status_manager.go:851] "Failed to get status for pod" podUID="7ff89667-3b76-4571-a07b-d43bce0a2e5b" pod="openshift-marketplace/redhat-operators-9tkgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9tkgm\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.753961 4745 status_manager.go:851] "Failed to get status for pod" podUID="3fcec544-9ef8-406d-9f01-b3ceabf2b033" pod="openshift-marketplace/redhat-operators-2c9wm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2c9wm\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.754380 4745 status_manager.go:851] "Failed to get status for pod" podUID="b2dce2e2-534d-417d-a4f7-d945631a53b4" pod="openshift-kube-apiserver/revision-pruner-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.754775 4745 status_manager.go:851] "Failed to get status for pod" podUID="d1170ea3-dafb-4e36-b873-bc1e339b86b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.755047 4745 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.755474 4745 status_manager.go:851] "Failed to get status for pod" podUID="d1170ea3-dafb-4e36-b873-bc1e339b86b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.755873 4745 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.756107 4745 status_manager.go:851] "Failed to get status for pod" podUID="6d114857-b077-4798-b578-b9a15645d31f" pod="openshift-marketplace/redhat-marketplace-hw272" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hw272\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.756494 4745 status_manager.go:851] "Failed to get status for pod" podUID="341a8942-834f-4f76-8269-7ecdecaaa1b0" pod="openshift-marketplace/community-operators-tzw6b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tzw6b\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.757234 4745 status_manager.go:851] "Failed to get status for pod" podUID="d2b41701-5113-4970-8d93-157bf16b3c06" pod="openshift-marketplace/community-operators-zhnbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-zhnbq\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.757751 4745 status_manager.go:851] "Failed to get status for pod" podUID="64c43381-42e2-4e01-9559-70c3c56070ea" pod="openshift-marketplace/certified-operators-255wr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-255wr\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.758098 4745 status_manager.go:851] "Failed to get status for pod" podUID="36154dea-ca68-4ca6-8e2f-83a669152ca7" pod="openshift-marketplace/redhat-marketplace-fzw5q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fzw5q\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.758462 4745 status_manager.go:851] "Failed to get status for pod" podUID="7c6f4dda-1294-4903-a4c1-6685307c3b25" pod="openshift-marketplace/certified-operators-bmx2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-bmx2n\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.759440 4745 status_manager.go:851] "Failed to get status for pod" podUID="7ff89667-3b76-4571-a07b-d43bce0a2e5b" pod="openshift-marketplace/redhat-operators-9tkgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9tkgm\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.760379 4745 status_manager.go:851] "Failed to get status for pod" podUID="b2dce2e2-534d-417d-a4f7-d945631a53b4" pod="openshift-kube-apiserver/revision-pruner-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-9-crc\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:06 crc kubenswrapper[4745]: I0127 12:17:06.760711 4745 status_manager.go:851] "Failed to get status for pod" podUID="3fcec544-9ef8-406d-9f01-b3ceabf2b033" pod="openshift-marketplace/redhat-operators-2c9wm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2c9wm\": dial tcp 38.129.56.233:6443: connect: connection refused"
Jan 27 12:17:07 crc kubenswrapper[4745]: I0127 12:17:07.728214 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 12:17:07 crc kubenswrapper[4745]: I0127 12:17:07.758564 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmx2n" event={"ID":"7c6f4dda-1294-4903-a4c1-6685307c3b25","Type":"ContainerStarted","Data":"bd5491f7f0e459da8a3dbed2aac2a653309e10eea252f2f2ba2907a23f2c904e"}
Jan 27 12:17:07 crc kubenswrapper[4745]: I0127 12:17:07.760250 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bff6beadc2e107ab59bbfcd47cf6d897974867ebe441094e01fb1fe522128073"}
Jan 27 12:17:08 crc kubenswrapper[4745]: I0127 12:17:08.766364 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hw272" event={"ID":"6d114857-b077-4798-b578-b9a15645d31f","Type":"ContainerStarted","Data":"68db3ea37b482395445dd0ac418e32ce5fddec7cf59150d5fa43cfaa1ce4b73b"}
Jan 27 12:17:08 crc kubenswrapper[4745]: I0127 12:17:08.768521 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fcfa72feb05d85f66933ecf80314e461fd9801569d71fe6675d27a8ae0d79892"}
Jan 27 12:17:10 crc kubenswrapper[4745]: I0127 12:17:10.441781 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 12:17:10 crc kubenswrapper[4745]: I0127 12:17:10.448008 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 12:17:10 crc kubenswrapper[4745]: I0127 12:17:10.782789 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"28495dd4676ddde435133b3b9086cf128ac23a21998d50e4047b27ab8f56c957"}
Jan 27 12:17:11 crc kubenswrapper[4745]: I0127 12:17:11.792328 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzw6b" event={"ID":"341a8942-834f-4f76-8269-7ecdecaaa1b0","Type":"ContainerStarted","Data":"b0916971a50047bc3ecf82a5e73970103b735a529c0ef23324cbb90cbed42099"}
Jan 27 12:17:14 crc kubenswrapper[4745]: I0127 12:17:14.922081 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bmx2n"
Jan 27 12:17:14 crc kubenswrapper[4745]: I0127 12:17:14.922494 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bmx2n"
Jan 27 12:17:15 crc kubenswrapper[4745]: I0127 12:17:15.297221 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-255wr"
Jan 27 12:17:15 crc kubenswrapper[4745]: I0127 12:17:15.297309 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-255wr"
Jan 27 12:17:15 crc kubenswrapper[4745]: I0127 12:17:15.520230 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-255wr"
Jan 27 12:17:15 crc kubenswrapper[4745]: I0127 12:17:15.526365 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bmx2n"
Jan 27 12:17:15 crc kubenswrapper[4745]: I0127 12:17:15.861700 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-255wr"
Jan 27 12:17:15 crc kubenswrapper[4745]: I0127 12:17:15.872262 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bmx2n"
Jan 27 12:17:17 crc kubenswrapper[4745]: I0127 12:17:17.083206 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hw272"
Jan 27 12:17:17 crc kubenswrapper[4745]: I0127 12:17:17.084092 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hw272"
Jan 27 12:17:17 crc kubenswrapper[4745]: I0127 12:17:17.169396 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hw272"
Jan 27 12:17:17 crc kubenswrapper[4745]: I0127 12:17:17.734411 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 12:17:17 crc kubenswrapper[4745]: I0127 12:17:17.829485 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzw5q" event={"ID":"36154dea-ca68-4ca6-8e2f-83a669152ca7","Type":"ContainerStarted","Data":"779d76e62c771a713509ec5c5a3052c6b04e91e24d02a597d523df0699e690a0"}
Jan 27 12:17:17 crc kubenswrapper[4745]: I0127 12:17:17.834196 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"36fe0eb15cae30625d395410ff5fad920a2c546ba0c4fed2c021da956dfc1c27"}
Jan 27 12:17:17 crc kubenswrapper[4745]: I0127 12:17:17.893655 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hw272"
Jan 27 12:17:24 crc kubenswrapper[4745]: I0127 12:17:24.179727 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 27 12:17:24 crc kubenswrapper[4745]: I0127 12:17:24.557928 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 27 12:17:24 crc kubenswrapper[4745]: I0127 12:17:24.755727 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 27 12:17:25 crc kubenswrapper[4745]: I0127 12:17:25.096191 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tzw6b"
Jan 27 12:17:25 crc kubenswrapper[4745]: I0127 12:17:25.096540 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tzw6b"
Jan 27 12:17:25 crc kubenswrapper[4745]: I0127 12:17:25.131986 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tzw6b"
Jan 27 12:17:25 crc kubenswrapper[4745]: I0127 12:17:25.683325 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 27 12:17:25 crc kubenswrapper[4745]: I0127 12:17:25.842451 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 27 12:17:25 crc kubenswrapper[4745]: I0127 12:17:25.921717 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tzw6b"
Jan 27 12:17:26 crc kubenswrapper[4745]: I0127 12:17:26.107086 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 27 12:17:26 crc kubenswrapper[4745]: I0127 12:17:26.845591 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 27 12:17:26 crc kubenswrapper[4745]: I0127 12:17:26.872667 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 27 12:17:26 crc kubenswrapper[4745]: I0127 12:17:26.992777 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 27 12:17:27 crc kubenswrapper[4745]: I0127 12:17:27.003223 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 27 12:17:27 crc kubenswrapper[4745]: I0127 12:17:27.380281 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 27 12:17:27 crc kubenswrapper[4745]: I0127 12:17:27.493624 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fzw5q"
Jan 27 12:17:27 crc kubenswrapper[4745]: I0127 12:17:27.512923 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fzw5q"
Jan 27 12:17:27 crc kubenswrapper[4745]: I0127 12:17:27.551734 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 27 12:17:27 crc kubenswrapper[4745]: I0127 12:17:27.556606 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fzw5q"
Jan 27 12:17:27 crc kubenswrapper[4745]: I0127 12:17:27.932187 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fzw5q"
Jan 27 12:17:27 crc kubenswrapper[4745]: I0127 12:17:27.950049 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 27 12:17:27 crc kubenswrapper[4745]: I0127 12:17:27.997008 4745 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 27 12:17:28 crc kubenswrapper[4745]: I0127 12:17:28.224309 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 27 12:17:28 crc kubenswrapper[4745]: I0127 12:17:28.230654 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 27 12:17:28 crc kubenswrapper[4745]: I0127 12:17:28.270983 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 27 12:17:28 crc kubenswrapper[4745]: I0127 12:17:28.308887 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 27 12:17:28 crc kubenswrapper[4745]: I0127 12:17:28.335206 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 27 12:17:28 crc kubenswrapper[4745]: I0127 12:17:28.433282 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 27 12:17:28 crc kubenswrapper[4745]: I0127 12:17:28.458464 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 27 12:17:28 crc kubenswrapper[4745]: I0127 12:17:28.492687 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 27 12:17:28 crc kubenswrapper[4745]: I0127 12:17:28.581458 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 27 12:17:28 crc kubenswrapper[4745]: I0127 12:17:28.655620 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 27 12:17:28 crc kubenswrapper[4745]: I0127 12:17:28.743715 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 27 12:17:28 crc kubenswrapper[4745]: I0127 12:17:28.822766 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 27 12:17:29 crc kubenswrapper[4745]: I0127 12:17:29.162795 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 27 12:17:29 crc kubenswrapper[4745]: I0127 12:17:29.268152 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 27 12:17:29 crc kubenswrapper[4745]: I0127 12:17:29.293706 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 27 12:17:29 crc kubenswrapper[4745]: I0127 12:17:29.294983 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 27 12:17:29 crc kubenswrapper[4745]: I0127 12:17:29.475613 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 27 12:17:29 crc kubenswrapper[4745]: I0127 12:17:29.486399 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 27 12:17:29 crc kubenswrapper[4745]: I0127 12:17:29.491610 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 27 12:17:29 crc kubenswrapper[4745]: I0127 12:17:29.496762 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 27 12:17:29 crc kubenswrapper[4745]: I0127 12:17:29.518767 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 27 12:17:29 crc kubenswrapper[4745]: I0127 12:17:29.633012 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 27 12:17:29 crc kubenswrapper[4745]: I0127 12:17:29.656753 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 27 12:17:29 crc kubenswrapper[4745]: I0127 12:17:29.737660 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 27 12:17:29 crc kubenswrapper[4745]: I0127 12:17:29.740588 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 27 12:17:29 crc kubenswrapper[4745]: I0127 12:17:29.770231 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 27 12:17:29 crc kubenswrapper[4745]: I0127 12:17:29.807117 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 27 12:17:29 crc kubenswrapper[4745]: I0127 12:17:29.810569 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 27 12:17:29 crc kubenswrapper[4745]: I0127 12:17:29.879154 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 27 12:17:29 crc kubenswrapper[4745]: I0127 12:17:29.947352 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 27 12:17:29 crc kubenswrapper[4745]: I0127 12:17:29.949277 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 27 12:17:30 crc kubenswrapper[4745]: I0127 12:17:30.112073 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 27 12:17:30 crc kubenswrapper[4745]: I0127 12:17:30.294352 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 27 12:17:30 crc kubenswrapper[4745]: I0127 12:17:30.349137 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 27 12:17:30 crc kubenswrapper[4745]: I0127 12:17:30.448928 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 27 12:17:30 crc kubenswrapper[4745]: I0127 12:17:30.472394 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 27 12:17:30 crc kubenswrapper[4745]: I0127 12:17:30.482487 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 27 12:17:30 crc kubenswrapper[4745]: I0127 12:17:30.526929 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 27 12:17:30 crc kubenswrapper[4745]: I0127 12:17:30.613685 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 27 12:17:30 crc kubenswrapper[4745]: I0127 12:17:30.673331 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 27 12:17:30 crc kubenswrapper[4745]: I0127 12:17:30.674321 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 27 12:17:30 crc kubenswrapper[4745]: I0127 12:17:30.709930 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 27 12:17:30 crc kubenswrapper[4745]: I0127 12:17:30.756324 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 27 12:17:30 crc kubenswrapper[4745]: I0127 12:17:30.787032 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 27 12:17:31 crc kubenswrapper[4745]: I0127 12:17:31.175083 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 27 12:17:31 crc kubenswrapper[4745]: I0127 12:17:31.311493 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 27 12:17:31 crc kubenswrapper[4745]: I0127 12:17:31.366722 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 27 12:17:31 crc kubenswrapper[4745]: I0127 12:17:31.523856 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 27 12:17:31 crc kubenswrapper[4745]: I0127 12:17:31.532121 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 27 12:17:31 crc kubenswrapper[4745]: I0127 12:17:31.651505 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 27 12:17:31 crc kubenswrapper[4745]: I0127 12:17:31.667764 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 27 12:17:31 crc kubenswrapper[4745]: I0127 12:17:31.915448 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d5e042a657c958dd384a7043bc0122976aab0b043dfd16e11d4ff1d9c378be6e"}
Jan 27 12:17:31 crc kubenswrapper[4745]: I0127 12:17:31.918265 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhnbq" event={"ID":"d2b41701-5113-4970-8d93-157bf16b3c06","Type":"ContainerStarted","Data":"cd15586e1d10c05eef5d0049e00af975204f6a2fa477e171c9dd2d6a8af3e157"}
Jan 27 12:17:32 crc kubenswrapper[4745]: I0127 12:17:32.078209 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 27 12:17:32 crc kubenswrapper[4745]: I0127 12:17:32.250479 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 27 12:17:32 crc kubenswrapper[4745]: I0127 12:17:32.259673 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 27 12:17:32 crc kubenswrapper[4745]: I0127 12:17:32.711083 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 27 12:17:32 crc kubenswrapper[4745]: I0127 12:17:32.757200 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 27 12:17:32 crc kubenswrapper[4745]: I0127 12:17:32.797444 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 27 12:17:32 crc kubenswrapper[4745]: I0127 12:17:32.804282 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 27 12:17:32 crc kubenswrapper[4745]: I0127 12:17:32.915957 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 27 12:17:32 crc kubenswrapper[4745]: I0127 12:17:32.925478 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c9wm" event={"ID":"3fcec544-9ef8-406d-9f01-b3ceabf2b033","Type":"ContainerStarted","Data":"4d7d03cea1849e923194b74bf1a85ac962a1c60eafcee366a949ff7005ab9c8a"}
Jan 27 12:17:32 crc kubenswrapper[4745]: I0127 12:17:32.928337 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9tkgm" event={"ID":"7ff89667-3b76-4571-a07b-d43bce0a2e5b","Type":"ContainerStarted","Data":"d3a160686ccda655c81e74bd9d33a37463d91ee5e8ab70e9a12d11197101634e"}
Jan 27 12:17:32 crc kubenswrapper[4745]: I0127 12:17:32.928621 4745 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c4c7cda7-14d9-4e22-82b9-f36bda68c36a"
Jan 27 12:17:32 crc kubenswrapper[4745]: I0127 12:17:32.928643 4745 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c4c7cda7-14d9-4e22-82b9-f36bda68c36a"
Jan 27 12:17:32 crc kubenswrapper[4745]: I0127 12:17:32.928879 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 12:17:32 crc kubenswrapper[4745]: I0127 12:17:32.941207 4745 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 12:17:32 crc kubenswrapper[4745]: I0127 12:17:32.971667 4745 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="78808ba5-134b-4af0-8662-203bcbfc685d"
Jan 27 12:17:32 crc kubenswrapper[4745]: I0127 12:17:32.984718 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 27 12:17:33 crc kubenswrapper[4745]: I0127 12:17:33.061997 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 27 12:17:33 crc kubenswrapper[4745]: I0127 12:17:33.288415 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 27 12:17:33 crc kubenswrapper[4745]: I0127 12:17:33.293358 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 27 12:17:33 crc kubenswrapper[4745]: I0127 12:17:33.383005 4745 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 27 12:17:33 crc kubenswrapper[4745]: I0127 12:17:33.397186 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 27 12:17:33 crc kubenswrapper[4745]: I0127 12:17:33.467911 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 27 12:17:33 crc kubenswrapper[4745]: I0127 12:17:33.636077 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 27 12:17:33 crc kubenswrapper[4745]: I0127 12:17:33.679058 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 27 12:17:33 crc kubenswrapper[4745]: I0127 12:17:33.837700 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 27 12:17:33 crc kubenswrapper[4745]: I0127 12:17:33.891370 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 27 12:17:33 crc kubenswrapper[4745]: I0127 12:17:33.933544 4745 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c4c7cda7-14d9-4e22-82b9-f36bda68c36a"
Jan 27 12:17:33 crc kubenswrapper[4745]: I0127 12:17:33.933572 4745 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c4c7cda7-14d9-4e22-82b9-f36bda68c36a"
Jan 27 12:17:34 crc kubenswrapper[4745]: I0127 12:17:34.053491 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 27 12:17:34 crc kubenswrapper[4745]: I0127 12:17:34.155856 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 27 12:17:34 crc kubenswrapper[4745]: I0127 12:17:34.158244 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 27 12:17:34 crc kubenswrapper[4745]: I0127 12:17:34.161900 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 27 12:17:34 crc kubenswrapper[4745]: I0127 12:17:34.347422 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 27 12:17:34 crc kubenswrapper[4745]: I0127 12:17:34.480191 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 27 12:17:34 crc kubenswrapper[4745]: I0127 12:17:34.511685 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 27 12:17:34 crc kubenswrapper[4745]: I0127 12:17:34.789310 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 27 12:17:35 crc kubenswrapper[4745]: I0127 12:17:35.051862 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 27 12:17:35 crc kubenswrapper[4745]: I0127 12:17:35.062733 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 27 12:17:35 crc kubenswrapper[4745]: I0127 12:17:35.186248 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 27 12:17:35 crc kubenswrapper[4745]: I0127 12:17:35.223602 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 27 12:17:35 crc kubenswrapper[4745]: I0127 12:17:35.380769 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 27 12:17:35 crc kubenswrapper[4745]: I0127 12:17:35.406684 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 27 12:17:35 crc kubenswrapper[4745]: I0127 12:17:35.489141 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zhnbq"
Jan 27 12:17:35 crc kubenswrapper[4745]: I0127 12:17:35.489181 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zhnbq"
Jan 27 12:17:35 crc kubenswrapper[4745]: I0127 12:17:35.497254 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 27 12:17:35 crc kubenswrapper[4745]: I0127 12:17:35.529394 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zhnbq"
Jan 27 12:17:35 crc kubenswrapper[4745]: I0127 12:17:35.689208 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 27 12:17:35 crc kubenswrapper[4745]: I0127 12:17:35.696487 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 27 12:17:35 crc kubenswrapper[4745]: I0127 12:17:35.759625 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 27 12:17:35 crc kubenswrapper[4745]: I0127 12:17:35.762710 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 27 12:17:35 crc kubenswrapper[4745]: I0127 12:17:35.780187 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 27 12:17:35 crc kubenswrapper[4745]: I0127 12:17:35.876618 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 27 12:17:35 crc kubenswrapper[4745]: I0127 12:17:35.902694 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 27 12:17:35 crc kubenswrapper[4745]: I0127 12:17:35.996495 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 27 12:17:36 crc kubenswrapper[4745]: I0127 12:17:36.058905 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 27 12:17:36 crc kubenswrapper[4745]: I0127 12:17:36.098718 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 12:17:36 crc kubenswrapper[4745]: I0127 12:17:36.098792 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 12:17:36 crc kubenswrapper[4745]: I0127 12:17:36.099251 4745 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c4c7cda7-14d9-4e22-82b9-f36bda68c36a"
Jan 27 12:17:36 crc kubenswrapper[4745]: I0127 12:17:36.099281 4745 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c4c7cda7-14d9-4e22-82b9-f36bda68c36a"
Jan 27 12:17:36 crc kubenswrapper[4745]: I0127 12:17:36.103346 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 12:17:36 crc kubenswrapper[4745]: I0127 12:17:36.165949 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 27 12:17:36 crc kubenswrapper[4745]: I0127 12:17:36.310084 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 27 12:17:36 crc kubenswrapper[4745]: I0127 12:17:36.349897 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 27 12:17:36 crc kubenswrapper[4745]: I0127 12:17:36.953644 4745 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c4c7cda7-14d9-4e22-82b9-f36bda68c36a"
Jan 27 12:17:36 crc kubenswrapper[4745]: I0127 12:17:36.953679 4745 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c4c7cda7-14d9-4e22-82b9-f36bda68c36a"
Jan 27 12:17:36 crc kubenswrapper[4745]: I0127 12:17:36.957603 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 12:17:36 crc kubenswrapper[4745]: I0127 12:17:36.959653 4745 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="78808ba5-134b-4af0-8662-203bcbfc685d"
Jan 27 12:17:37 crc kubenswrapper[4745]: I0127 12:17:37.126305 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 27 12:17:37 crc kubenswrapper[4745]: I0127 12:17:37.395851 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 27 12:17:37 crc kubenswrapper[4745]: I0127 12:17:37.440477 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 27 12:17:37 crc kubenswrapper[4745]: I0127 12:17:37.570219 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 27 12:17:37 crc kubenswrapper[4745]: I0127 12:17:37.577557 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 27 12:17:37 crc kubenswrapper[4745]: I0127 12:17:37.674323 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 27 12:17:37 crc kubenswrapper[4745]: I0127 12:17:37.754661 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 27 12:17:37 crc kubenswrapper[4745]: I0127 12:17:37.938240 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 27 12:17:37 crc kubenswrapper[4745]: I0127 12:17:37.959142 4745 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c4c7cda7-14d9-4e22-82b9-f36bda68c36a"
Jan 27 12:17:37 crc kubenswrapper[4745]: I0127 12:17:37.959173 4745 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c4c7cda7-14d9-4e22-82b9-f36bda68c36a"
Jan 27 12:17:38 crc kubenswrapper[4745]: I0127 12:17:38.003691 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 27 12:17:38 crc kubenswrapper[4745]: I0127 12:17:38.006506 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 27 12:17:38 crc kubenswrapper[4745]: I0127 12:17:38.067593 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 27 12:17:38 crc kubenswrapper[4745]: I0127 12:17:38.116760 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9tkgm"
Jan 27 12:17:38 crc kubenswrapper[4745]: I0127 12:17:38.116864 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9tkgm"
Jan 27 12:17:38 crc kubenswrapper[4745]: I0127 12:17:38.131544 4745 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="78808ba5-134b-4af0-8662-203bcbfc685d"
Jan 27 12:17:38 crc kubenswrapper[4745]: I0127 12:17:38.278321 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 27 12:17:38 crc kubenswrapper[4745]: I0127 12:17:38.416864 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 27 12:17:38 crc kubenswrapper[4745]: I0127 12:17:38.481731 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 27 12:17:38 crc kubenswrapper[4745]: I0127 12:17:38.508119 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2c9wm"
Jan 27 12:17:38 crc kubenswrapper[4745]: I0127 12:17:38.508251 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2c9wm"
Jan 27 12:17:38 crc kubenswrapper[4745]: I0127 12:17:38.551753 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2c9wm"
Jan 27 12:17:38 crc kubenswrapper[4745]: I0127 12:17:38.654513 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 27 12:17:38 crc kubenswrapper[4745]: I0127 12:17:38.741283 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 27 12:17:38 crc kubenswrapper[4745]: I0127 12:17:38.889162 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 27 12:17:39 crc kubenswrapper[4745]: I0127 12:17:39.025118 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 27 12:17:39 crc kubenswrapper[4745]: I0127 12:17:39.026366 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2c9wm"
Jan 27 12:17:39 crc kubenswrapper[4745]: I0127 12:17:39.155764 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9tkgm" podUID="7ff89667-3b76-4571-a07b-d43bce0a2e5b" containerName="registry-server" probeResult="failure" output=<
Jan 27 12:17:39 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s
Jan 27 12:17:39 crc kubenswrapper[4745]: >
Jan 27 12:17:39 crc kubenswrapper[4745]: I0127 12:17:39.318264 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 27 12:17:39 crc kubenswrapper[4745]: I0127 12:17:39.566184 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 27 12:17:39 crc kubenswrapper[4745]: I0127 12:17:39.681290 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 27 12:17:39 crc kubenswrapper[4745]: I0127 12:17:39.977200 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 27 12:17:45 crc kubenswrapper[4745]: I0127 12:17:45.528113 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zhnbq"
Jan 27 12:17:45 crc kubenswrapper[4745]: I0127 12:17:45.842516 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 27 12:17:46 crc kubenswrapper[4745]: I0127 12:17:46.367457 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 27 12:17:47 crc kubenswrapper[4745]: I0127 12:17:47.482609 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 27 12:17:48 crc kubenswrapper[4745]: I0127 12:17:48.131240 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9tkgm"
Jan 27 12:17:48 crc kubenswrapper[4745]: I0127 12:17:48.171017 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9tkgm"
Jan 27 12:17:48 crc kubenswrapper[4745]: I0127 12:17:48.503319 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 27 12:17:48 crc kubenswrapper[4745]: I0127 12:17:48.514319 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 27 12:17:50 crc kubenswrapper[4745]: I0127 12:17:50.385794 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 27 12:17:50 crc kubenswrapper[4745]: I0127 12:17:50.578856 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 27 12:17:50 crc kubenswrapper[4745]: I0127 12:17:50.676555 4745 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 27 12:17:50 crc kubenswrapper[4745]: I0127 12:17:50.677373 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9tkgm" podStartSLOduration=26.091125171 podStartE2EDuration="3m13.677352734s" podCreationTimestamp="2026-01-27 12:14:37 +0000 UTC" firstStartedPulling="2026-01-27 12:14:42.592322228 +0000 UTC m=+175.397232916" lastFinishedPulling="2026-01-27 12:17:30.178549781 +0000 UTC m=+342.983460479" observedRunningTime="2026-01-27 12:17:33.949369872 +0000 UTC m=+346.754280570" watchObservedRunningTime="2026-01-27 12:17:50.677352734 +0000 UTC m=+363.482263422"
12:17:50 crc kubenswrapper[4745]: I0127 12:17:50.677725 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zhnbq" podStartSLOduration=29.810106395 podStartE2EDuration="3m15.677698074s" podCreationTimestamp="2026-01-27 12:14:35 +0000 UTC" firstStartedPulling="2026-01-27 12:14:40.572686158 +0000 UTC m=+173.377596846" lastFinishedPulling="2026-01-27 12:17:26.440277817 +0000 UTC m=+339.245188525" observedRunningTime="2026-01-27 12:17:32.968208837 +0000 UTC m=+345.773119525" watchObservedRunningTime="2026-01-27 12:17:50.677698074 +0000 UTC m=+363.482608762" Jan 27 12:17:50 crc kubenswrapper[4745]: I0127 12:17:50.677848 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fzw5q" podStartSLOduration=39.776507 podStartE2EDuration="3m13.677844238s" podCreationTimestamp="2026-01-27 12:14:37 +0000 UTC" firstStartedPulling="2026-01-27 12:14:42.59237057 +0000 UTC m=+175.397281258" lastFinishedPulling="2026-01-27 12:17:16.493707818 +0000 UTC m=+329.298618496" observedRunningTime="2026-01-27 12:17:17.857002126 +0000 UTC m=+330.661912824" watchObservedRunningTime="2026-01-27 12:17:50.677844238 +0000 UTC m=+363.482754926" Jan 27 12:17:50 crc kubenswrapper[4745]: I0127 12:17:50.677955 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hw272" podStartSLOduration=47.199015926 podStartE2EDuration="3m14.677950261s" podCreationTimestamp="2026-01-27 12:14:36 +0000 UTC" firstStartedPulling="2026-01-27 12:14:40.573231013 +0000 UTC m=+173.378141701" lastFinishedPulling="2026-01-27 12:17:08.052165348 +0000 UTC m=+320.857076036" observedRunningTime="2026-01-27 12:17:14.329157776 +0000 UTC m=+327.134068464" watchObservedRunningTime="2026-01-27 12:17:50.677950261 +0000 UTC m=+363.482860949" Jan 27 12:17:50 crc kubenswrapper[4745]: I0127 12:17:50.679292 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-255wr" podStartSLOduration=49.757648188 podStartE2EDuration="3m16.67928174s" podCreationTimestamp="2026-01-27 12:14:34 +0000 UTC" firstStartedPulling="2026-01-27 12:14:39.558683392 +0000 UTC m=+172.363594090" lastFinishedPulling="2026-01-27 12:17:06.480316954 +0000 UTC m=+319.285227642" observedRunningTime="2026-01-27 12:17:14.351760605 +0000 UTC m=+327.156671313" watchObservedRunningTime="2026-01-27 12:17:50.67928174 +0000 UTC m=+363.484192438" Jan 27 12:17:50 crc kubenswrapper[4745]: I0127 12:17:50.679419 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bmx2n" podStartSLOduration=49.220011098 podStartE2EDuration="3m16.679412673s" podCreationTimestamp="2026-01-27 12:14:34 +0000 UTC" firstStartedPulling="2026-01-27 12:14:39.537473522 +0000 UTC m=+172.342384210" lastFinishedPulling="2026-01-27 12:17:06.996875097 +0000 UTC m=+319.801785785" observedRunningTime="2026-01-27 12:17:14.236761582 +0000 UTC m=+327.041672280" watchObservedRunningTime="2026-01-27 12:17:50.679412673 +0000 UTC m=+363.484323371" Jan 27 12:17:50 crc kubenswrapper[4745]: I0127 12:17:50.683194 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2c9wm" podStartSLOduration=26.094169085 podStartE2EDuration="3m12.683181352s" podCreationTimestamp="2026-01-27 12:14:38 +0000 UTC" firstStartedPulling="2026-01-27 12:14:43.602076833 +0000 UTC m=+176.406987521" 
lastFinishedPulling="2026-01-27 12:17:30.1910891 +0000 UTC m=+342.995999788" observedRunningTime="2026-01-27 12:17:32.949676682 +0000 UTC m=+345.754587370" watchObservedRunningTime="2026-01-27 12:17:50.683181352 +0000 UTC m=+363.488092040" Jan 27 12:17:50 crc kubenswrapper[4745]: I0127 12:17:50.683489 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tzw6b" podStartSLOduration=48.452683231 podStartE2EDuration="3m16.683482661s" podCreationTimestamp="2026-01-27 12:14:34 +0000 UTC" firstStartedPulling="2026-01-27 12:14:40.572142702 +0000 UTC m=+173.377053390" lastFinishedPulling="2026-01-27 12:17:08.802942132 +0000 UTC m=+321.607852820" observedRunningTime="2026-01-27 12:17:15.834265974 +0000 UTC m=+328.639176662" watchObservedRunningTime="2026-01-27 12:17:50.683482661 +0000 UTC m=+363.488393349" Jan 27 12:17:50 crc kubenswrapper[4745]: I0127 12:17:50.684253 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 12:17:50 crc kubenswrapper[4745]: I0127 12:17:50.684294 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 12:17:50 crc kubenswrapper[4745]: I0127 12:17:50.691005 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 12:17:50 crc kubenswrapper[4745]: I0127 12:17:50.706094 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.706076384 podStartE2EDuration="18.706076384s" podCreationTimestamp="2026-01-27 12:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:17:50.702173431 +0000 UTC m=+363.507084119" watchObservedRunningTime="2026-01-27 12:17:50.706076384 +0000 UTC m=+363.510987082" Jan 27 12:17:50 crc kubenswrapper[4745]: I0127 12:17:50.958946 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 12:17:51 crc kubenswrapper[4745]: I0127 12:17:51.443515 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 12:17:51 crc kubenswrapper[4745]: I0127 12:17:51.502693 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 12:17:52 crc kubenswrapper[4745]: I0127 12:17:52.046766 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 12:17:52 crc kubenswrapper[4745]: I0127 12:17:52.321884 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 12:17:52 crc kubenswrapper[4745]: I0127 12:17:52.328619 4745 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 12:17:52 crc kubenswrapper[4745]: I0127 12:17:52.582416 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 12:17:52 crc kubenswrapper[4745]: I0127 12:17:52.653169 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 12:17:53 crc kubenswrapper[4745]: I0127 12:17:53.164282 4745 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 12:17:53 crc kubenswrapper[4745]: I0127 12:17:53.386302 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 12:17:53 crc kubenswrapper[4745]: I0127 12:17:53.583173 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 12:17:53 crc kubenswrapper[4745]: I0127 12:17:53.651489 4745 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 12:17:53 crc kubenswrapper[4745]: I0127 12:17:53.889880 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 12:17:54 crc kubenswrapper[4745]: I0127 12:17:54.120532 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 12:17:54 crc kubenswrapper[4745]: I0127 12:17:54.690953 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 12:17:55 crc kubenswrapper[4745]: I0127 12:17:55.171536 4745 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 12:17:55 crc kubenswrapper[4745]: I0127 12:17:55.172600 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://2aa387d05eb73f5ff660492280b771a6e6bf0d682704cf2c55afc23f19fa08fc" gracePeriod=5 Jan 27 12:17:55 crc kubenswrapper[4745]: I0127 12:17:55.584372 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 12:17:56 crc kubenswrapper[4745]: I0127 12:17:56.075655 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 12:17:56 crc kubenswrapper[4745]: I0127 12:17:56.434274 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 12:17:56 crc kubenswrapper[4745]: I0127 12:17:56.538780 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 12:17:56 crc kubenswrapper[4745]: I0127 12:17:56.685229 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 12:17:56 crc kubenswrapper[4745]: I0127 12:17:56.916930 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 12:17:57 crc kubenswrapper[4745]: I0127 12:17:57.522003 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 12:17:57 crc kubenswrapper[4745]: I0127 12:17:57.826035 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 12:17:58 crc kubenswrapper[4745]: I0127 12:17:58.104027 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 12:17:58 crc kubenswrapper[4745]: I0127 12:17:58.435031 4745 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 12:17:58 crc kubenswrapper[4745]: I0127 12:17:58.687271 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 12:17:59 crc kubenswrapper[4745]: I0127 12:17:59.576200 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 12:17:59 crc kubenswrapper[4745]: I0127 12:17:59.758225 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 12:17:59 crc kubenswrapper[4745]: I0127 12:17:59.972890 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 12:18:00 crc kubenswrapper[4745]: I0127 12:18:00.059715 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 12:18:00 crc kubenswrapper[4745]: I0127 12:18:00.440468 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 12:18:00 crc kubenswrapper[4745]: I0127 12:18:00.825040 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 12:18:00 crc kubenswrapper[4745]: I0127 12:18:00.825109 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 12:18:00 crc kubenswrapper[4745]: I0127 12:18:00.959201 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 12:18:00 crc kubenswrapper[4745]: I0127 12:18:00.959511 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 12:18:00 crc kubenswrapper[4745]: I0127 12:18:00.959656 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 12:18:00 crc kubenswrapper[4745]: I0127 12:18:00.959767 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 12:18:00 crc kubenswrapper[4745]: I0127 12:18:00.959323 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 12:18:00 crc kubenswrapper[4745]: I0127 12:18:00.959568 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 12:18:00 crc kubenswrapper[4745]: I0127 12:18:00.959710 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 12:18:00 crc kubenswrapper[4745]: I0127 12:18:00.959862 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 12:18:00 crc kubenswrapper[4745]: I0127 12:18:00.959967 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 12:18:00 crc kubenswrapper[4745]: I0127 12:18:00.960541 4745 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:00 crc kubenswrapper[4745]: I0127 12:18:00.960636 4745 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:00 crc kubenswrapper[4745]: I0127 12:18:00.960725 4745 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:00 crc kubenswrapper[4745]: I0127 12:18:00.960834 4745 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:00 crc kubenswrapper[4745]: I0127 12:18:00.968564 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 12:18:00 crc kubenswrapper[4745]: I0127 12:18:00.972151 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 12:18:01 crc kubenswrapper[4745]: I0127 12:18:01.008461 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 12:18:01 crc kubenswrapper[4745]: I0127 12:18:01.058201 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 12:18:01 crc kubenswrapper[4745]: I0127 12:18:01.061897 4745 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:01 crc kubenswrapper[4745]: I0127 12:18:01.081308 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 12:18:01 crc kubenswrapper[4745]: I0127 12:18:01.094991 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 12:18:01 crc kubenswrapper[4745]: I0127 12:18:01.095036 4745 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="2aa387d05eb73f5ff660492280b771a6e6bf0d682704cf2c55afc23f19fa08fc" exitCode=137 Jan 27 12:18:01 crc kubenswrapper[4745]: I0127 12:18:01.095074 4745 scope.go:117] "RemoveContainer" containerID="2aa387d05eb73f5ff660492280b771a6e6bf0d682704cf2c55afc23f19fa08fc" Jan 27 12:18:01 crc kubenswrapper[4745]: I0127 12:18:01.095141 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 12:18:01 crc kubenswrapper[4745]: I0127 12:18:01.112771 4745 scope.go:117] "RemoveContainer" containerID="2aa387d05eb73f5ff660492280b771a6e6bf0d682704cf2c55afc23f19fa08fc" Jan 27 12:18:01 crc kubenswrapper[4745]: E0127 12:18:01.113390 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aa387d05eb73f5ff660492280b771a6e6bf0d682704cf2c55afc23f19fa08fc\": container with ID starting with 2aa387d05eb73f5ff660492280b771a6e6bf0d682704cf2c55afc23f19fa08fc not found: ID does not exist" containerID="2aa387d05eb73f5ff660492280b771a6e6bf0d682704cf2c55afc23f19fa08fc" Jan 27 12:18:01 crc kubenswrapper[4745]: I0127 12:18:01.113422 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aa387d05eb73f5ff660492280b771a6e6bf0d682704cf2c55afc23f19fa08fc"} err="failed to get container status \"2aa387d05eb73f5ff660492280b771a6e6bf0d682704cf2c55afc23f19fa08fc\": rpc error: code = NotFound desc = could not find container \"2aa387d05eb73f5ff660492280b771a6e6bf0d682704cf2c55afc23f19fa08fc\": container with ID starting with 2aa387d05eb73f5ff660492280b771a6e6bf0d682704cf2c55afc23f19fa08fc not found: ID does not exist" Jan 27 12:18:01 crc kubenswrapper[4745]: I0127 12:18:01.139615 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 12:18:01 crc kubenswrapper[4745]: I0127 12:18:01.258574 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 12:18:01 crc kubenswrapper[4745]: I0127 12:18:01.455390 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 12:18:01 crc kubenswrapper[4745]: I0127 12:18:01.546998 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 12:18:01 crc kubenswrapper[4745]: I0127 12:18:01.633197 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 12:18:01 crc kubenswrapper[4745]: I0127 12:18:01.678463 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 12:18:01 crc kubenswrapper[4745]: I0127 12:18:01.796501 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 12:18:01 crc kubenswrapper[4745]: I0127 12:18:01.917611 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p"] Jan 27 12:18:01 crc kubenswrapper[4745]: I0127 12:18:01.917874 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p" podUID="0fecdc41-ce07-4598-8b9b-92ef20b9cbf9" containerName="route-controller-manager" containerID="cri-o://2bc895b11b266e2683da00535afeea4f601fe625bea42c0fb7443934f74f12ab" gracePeriod=30 Jan 27 12:18:01 crc kubenswrapper[4745]: I0127 12:18:01.922627 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-66df77f6dd-8xdt6"] Jan 27 12:18:01 crc kubenswrapper[4745]: I0127 12:18:01.922826 4745 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-controller-manager/controller-manager-66df77f6dd-8xdt6" podUID="e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7" containerName="controller-manager" containerID="cri-o://f376bbcd622cc7b4208bbe8df375c214f4668876ba745a56f69cbeb55a2489f5" gracePeriod=30 Jan 27 12:18:02 crc kubenswrapper[4745]: E0127 12:18:02.066301 4745 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1e76ca9_2a3f_4add_8f6f_6c3abe26c3f7.slice/crio-f376bbcd622cc7b4208bbe8df375c214f4668876ba745a56f69cbeb55a2489f5.scope\": RecentStats: unable to find data in memory cache]" Jan 27 12:18:02 crc kubenswrapper[4745]: I0127 12:18:02.082136 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 27 12:18:02 crc kubenswrapper[4745]: I0127 12:18:02.290000 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 12:18:02 crc kubenswrapper[4745]: I0127 12:18:02.582491 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 12:18:02 crc kubenswrapper[4745]: I0127 12:18:02.583852 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 12:18:02 crc kubenswrapper[4745]: I0127 12:18:02.585283 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 12:18:02 crc kubenswrapper[4745]: I0127 12:18:02.590235 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 12:18:02 crc kubenswrapper[4745]: I0127 12:18:02.658052 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 12:18:02 crc kubenswrapper[4745]: I0127 12:18:02.717635 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.081963 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-66df77f6dd-8xdt6" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.110568 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8c895b4dc-b2htf"] Jan 27 12:18:03 crc kubenswrapper[4745]: E0127 12:18:03.110973 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2dce2e2-534d-417d-a4f7-d945631a53b4" containerName="pruner" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.111020 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2dce2e2-534d-417d-a4f7-d945631a53b4" containerName="pruner" Jan 27 12:18:03 crc kubenswrapper[4745]: E0127 12:18:03.111043 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7" containerName="controller-manager" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.111054 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7" containerName="controller-manager" Jan 27 12:18:03 crc kubenswrapper[4745]: E0127 12:18:03.111097 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1170ea3-dafb-4e36-b873-bc1e339b86b9" containerName="installer" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.111106 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1170ea3-dafb-4e36-b873-bc1e339b86b9" containerName="installer" Jan 27 12:18:03 crc kubenswrapper[4745]: E0127 12:18:03.111115 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.111121 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.111308 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1170ea3-dafb-4e36-b873-bc1e339b86b9" containerName="installer" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.111355 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.111366 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7" containerName="controller-manager" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.111380 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2dce2e2-534d-417d-a4f7-d945631a53b4" containerName="pruner" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.112100 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8c895b4dc-b2htf" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.123705 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8c895b4dc-b2htf"] Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.137675 4745 generic.go:334] "Generic (PLEG): container finished" podID="0fecdc41-ce07-4598-8b9b-92ef20b9cbf9" containerID="2bc895b11b266e2683da00535afeea4f601fe625bea42c0fb7443934f74f12ab" exitCode=0 Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.137752 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p" event={"ID":"0fecdc41-ce07-4598-8b9b-92ef20b9cbf9","Type":"ContainerDied","Data":"2bc895b11b266e2683da00535afeea4f601fe625bea42c0fb7443934f74f12ab"} Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.137779 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p" event={"ID":"0fecdc41-ce07-4598-8b9b-92ef20b9cbf9","Type":"ContainerDied","Data":"39671989f37ebe7c0551a560fecca8e8042ca0c476446925538d5de1b06aaae7"} Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.137794 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39671989f37ebe7c0551a560fecca8e8042ca0c476446925538d5de1b06aaae7" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.138384 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.139284 4745 generic.go:334] "Generic (PLEG): container finished" podID="e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7" containerID="f376bbcd622cc7b4208bbe8df375c214f4668876ba745a56f69cbeb55a2489f5" exitCode=0 Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.139318 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66df77f6dd-8xdt6" event={"ID":"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7","Type":"ContainerDied","Data":"f376bbcd622cc7b4208bbe8df375c214f4668876ba745a56f69cbeb55a2489f5"} Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.139345 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66df77f6dd-8xdt6" event={"ID":"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7","Type":"ContainerDied","Data":"3bd6842fc96bd42fd11ef26f4cff151a116e2e813499a4258e7911f3d51b7750"} Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.139362 4745 scope.go:117] "RemoveContainer" containerID="f376bbcd622cc7b4208bbe8df375c214f4668876ba745a56f69cbeb55a2489f5" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.139370 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-66df77f6dd-8xdt6" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.167083 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.172553 4745 scope.go:117] "RemoveContainer" containerID="f376bbcd622cc7b4208bbe8df375c214f4668876ba745a56f69cbeb55a2489f5" Jan 27 12:18:03 crc kubenswrapper[4745]: E0127 12:18:03.173020 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f376bbcd622cc7b4208bbe8df375c214f4668876ba745a56f69cbeb55a2489f5\": container with ID starting with f376bbcd622cc7b4208bbe8df375c214f4668876ba745a56f69cbeb55a2489f5 not found: ID does not exist" containerID="f376bbcd622cc7b4208bbe8df375c214f4668876ba745a56f69cbeb55a2489f5" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.173063 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f376bbcd622cc7b4208bbe8df375c214f4668876ba745a56f69cbeb55a2489f5"} err="failed to get container status \"f376bbcd622cc7b4208bbe8df375c214f4668876ba745a56f69cbeb55a2489f5\": rpc error: code = NotFound desc = could not find container \"f376bbcd622cc7b4208bbe8df375c214f4668876ba745a56f69cbeb55a2489f5\": container with ID starting with f376bbcd622cc7b4208bbe8df375c214f4668876ba745a56f69cbeb55a2489f5 not found: ID does not exist" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.194576 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-client-ca\") pod \"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7\" (UID: \"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7\") " Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.194679 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-serving-cert\") pod \"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7\" (UID: \"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7\") " Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.194709 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-config\") pod \"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7\" (UID: \"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7\") " Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.194733 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-proxy-ca-bundles\") pod \"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7\" (UID: \"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7\") " Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.194779 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv5w6\" (UniqueName: \"kubernetes.io/projected/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-kube-api-access-fv5w6\") pod \"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7\" (UID: \"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7\") " Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.195971 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-config" (OuterVolumeSpecName: "config") pod 
"e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7" (UID: "e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.196005 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7" (UID: "e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.196008 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e3273e6-4970-4fae-915c-8333f3c91d3f-client-ca\") pod \"controller-manager-8c895b4dc-b2htf\" (UID: \"7e3273e6-4970-4fae-915c-8333f3c91d3f\") " pod="openshift-controller-manager/controller-manager-8c895b4dc-b2htf" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.196162 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e3273e6-4970-4fae-915c-8333f3c91d3f-serving-cert\") pod \"controller-manager-8c895b4dc-b2htf\" (UID: \"7e3273e6-4970-4fae-915c-8333f3c91d3f\") " pod="openshift-controller-manager/controller-manager-8c895b4dc-b2htf" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.196210 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e3273e6-4970-4fae-915c-8333f3c91d3f-config\") pod \"controller-manager-8c895b4dc-b2htf\" (UID: \"7e3273e6-4970-4fae-915c-8333f3c91d3f\") " pod="openshift-controller-manager/controller-manager-8c895b4dc-b2htf" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.196252 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e3273e6-4970-4fae-915c-8333f3c91d3f-proxy-ca-bundles\") pod \"controller-manager-8c895b4dc-b2htf\" (UID: \"7e3273e6-4970-4fae-915c-8333f3c91d3f\") " pod="openshift-controller-manager/controller-manager-8c895b4dc-b2htf" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.196321 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqfxs\" (UniqueName: \"kubernetes.io/projected/7e3273e6-4970-4fae-915c-8333f3c91d3f-kube-api-access-vqfxs\") pod \"controller-manager-8c895b4dc-b2htf\" (UID: \"7e3273e6-4970-4fae-915c-8333f3c91d3f\") " pod="openshift-controller-manager/controller-manager-8c895b4dc-b2htf" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.196442 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-client-ca" (OuterVolumeSpecName: "client-ca") pod "e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7" (UID: "e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.196501 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.196523 4745 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.200967 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-kube-api-access-fv5w6" (OuterVolumeSpecName: "kube-api-access-fv5w6") pod "e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7" (UID: "e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7"). InnerVolumeSpecName "kube-api-access-fv5w6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.200979 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7" (UID: "e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.297244 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0fecdc41-ce07-4598-8b9b-92ef20b9cbf9-client-ca\") pod \"0fecdc41-ce07-4598-8b9b-92ef20b9cbf9\" (UID: \"0fecdc41-ce07-4598-8b9b-92ef20b9cbf9\") " Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.297356 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjvrd\" (UniqueName: \"kubernetes.io/projected/0fecdc41-ce07-4598-8b9b-92ef20b9cbf9-kube-api-access-vjvrd\") pod \"0fecdc41-ce07-4598-8b9b-92ef20b9cbf9\" (UID: \"0fecdc41-ce07-4598-8b9b-92ef20b9cbf9\") " Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.297382 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fecdc41-ce07-4598-8b9b-92ef20b9cbf9-serving-cert\") pod \"0fecdc41-ce07-4598-8b9b-92ef20b9cbf9\" (UID: \"0fecdc41-ce07-4598-8b9b-92ef20b9cbf9\") " Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.297438 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fecdc41-ce07-4598-8b9b-92ef20b9cbf9-config\") pod \"0fecdc41-ce07-4598-8b9b-92ef20b9cbf9\" (UID: \"0fecdc41-ce07-4598-8b9b-92ef20b9cbf9\") " Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.297668 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e3273e6-4970-4fae-915c-8333f3c91d3f-client-ca\") pod \"controller-manager-8c895b4dc-b2htf\" (UID: \"7e3273e6-4970-4fae-915c-8333f3c91d3f\") " pod="openshift-controller-manager/controller-manager-8c895b4dc-b2htf" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.297729 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e3273e6-4970-4fae-915c-8333f3c91d3f-serving-cert\") pod 
\"controller-manager-8c895b4dc-b2htf\" (UID: \"7e3273e6-4970-4fae-915c-8333f3c91d3f\") " pod="openshift-controller-manager/controller-manager-8c895b4dc-b2htf" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.297758 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e3273e6-4970-4fae-915c-8333f3c91d3f-config\") pod \"controller-manager-8c895b4dc-b2htf\" (UID: \"7e3273e6-4970-4fae-915c-8333f3c91d3f\") " pod="openshift-controller-manager/controller-manager-8c895b4dc-b2htf" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.297778 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e3273e6-4970-4fae-915c-8333f3c91d3f-proxy-ca-bundles\") pod \"controller-manager-8c895b4dc-b2htf\" (UID: \"7e3273e6-4970-4fae-915c-8333f3c91d3f\") " pod="openshift-controller-manager/controller-manager-8c895b4dc-b2htf" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.297860 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqfxs\" (UniqueName: \"kubernetes.io/projected/7e3273e6-4970-4fae-915c-8333f3c91d3f-kube-api-access-vqfxs\") pod \"controller-manager-8c895b4dc-b2htf\" (UID: \"7e3273e6-4970-4fae-915c-8333f3c91d3f\") " pod="openshift-controller-manager/controller-manager-8c895b4dc-b2htf" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.297918 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv5w6\" (UniqueName: \"kubernetes.io/projected/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-kube-api-access-fv5w6\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.297932 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.297945 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.299283 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fecdc41-ce07-4598-8b9b-92ef20b9cbf9-client-ca" (OuterVolumeSpecName: "client-ca") pod "0fecdc41-ce07-4598-8b9b-92ef20b9cbf9" (UID: "0fecdc41-ce07-4598-8b9b-92ef20b9cbf9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.302103 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fecdc41-ce07-4598-8b9b-92ef20b9cbf9-kube-api-access-vjvrd" (OuterVolumeSpecName: "kube-api-access-vjvrd") pod "0fecdc41-ce07-4598-8b9b-92ef20b9cbf9" (UID: "0fecdc41-ce07-4598-8b9b-92ef20b9cbf9"). InnerVolumeSpecName "kube-api-access-vjvrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.306191 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e3273e6-4970-4fae-915c-8333f3c91d3f-serving-cert\") pod \"controller-manager-8c895b4dc-b2htf\" (UID: \"7e3273e6-4970-4fae-915c-8333f3c91d3f\") " pod="openshift-controller-manager/controller-manager-8c895b4dc-b2htf" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.307559 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e3273e6-4970-4fae-915c-8333f3c91d3f-config\") pod \"controller-manager-8c895b4dc-b2htf\" (UID: \"7e3273e6-4970-4fae-915c-8333f3c91d3f\") " pod="openshift-controller-manager/controller-manager-8c895b4dc-b2htf" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.308801 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e3273e6-4970-4fae-915c-8333f3c91d3f-proxy-ca-bundles\") pod \"controller-manager-8c895b4dc-b2htf\" (UID: \"7e3273e6-4970-4fae-915c-8333f3c91d3f\") " pod="openshift-controller-manager/controller-manager-8c895b4dc-b2htf" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.310580 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fecdc41-ce07-4598-8b9b-92ef20b9cbf9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0fecdc41-ce07-4598-8b9b-92ef20b9cbf9" (UID: "0fecdc41-ce07-4598-8b9b-92ef20b9cbf9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.311200 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fecdc41-ce07-4598-8b9b-92ef20b9cbf9-config" (OuterVolumeSpecName: "config") pod "0fecdc41-ce07-4598-8b9b-92ef20b9cbf9" (UID: "0fecdc41-ce07-4598-8b9b-92ef20b9cbf9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.318570 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqfxs\" (UniqueName: \"kubernetes.io/projected/7e3273e6-4970-4fae-915c-8333f3c91d3f-kube-api-access-vqfxs\") pod \"controller-manager-8c895b4dc-b2htf\" (UID: \"7e3273e6-4970-4fae-915c-8333f3c91d3f\") " pod="openshift-controller-manager/controller-manager-8c895b4dc-b2htf" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.331208 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e3273e6-4970-4fae-915c-8333f3c91d3f-client-ca\") pod \"controller-manager-8c895b4dc-b2htf\" (UID: \"7e3273e6-4970-4fae-915c-8333f3c91d3f\") " pod="openshift-controller-manager/controller-manager-8c895b4dc-b2htf" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.356840 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.399931 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fecdc41-ce07-4598-8b9b-92ef20b9cbf9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.399967 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjvrd\" (UniqueName: \"kubernetes.io/projected/0fecdc41-ce07-4598-8b9b-92ef20b9cbf9-kube-api-access-vjvrd\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.400399 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fecdc41-ce07-4598-8b9b-92ef20b9cbf9-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.400407 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0fecdc41-ce07-4598-8b9b-92ef20b9cbf9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.436538 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.448310 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8c895b4dc-b2htf" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.468850 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-66df77f6dd-8xdt6"] Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.472870 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-66df77f6dd-8xdt6"] Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.479677 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.620147 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.635951 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.765092 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.805288 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.975082 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 12:18:03 crc kubenswrapper[4745]: I0127 12:18:03.990246 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 12:18:04 crc kubenswrapper[4745]: I0127 12:18:04.080918 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7" path="/var/lib/kubelet/pods/e1e76ca9-2a3f-4add-8f6f-6c3abe26c3f7/volumes" Jan 27 12:18:04 crc kubenswrapper[4745]: I0127 12:18:04.146374 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p" Jan 27 12:18:04 crc kubenswrapper[4745]: I0127 12:18:04.170208 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p"] Jan 27 12:18:04 crc kubenswrapper[4745]: I0127 12:18:04.180557 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d7f49f5-mfq2p"] Jan 27 12:18:04 crc kubenswrapper[4745]: I0127 12:18:04.426927 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 12:18:04 crc kubenswrapper[4745]: I0127 12:18:04.766797 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 12:18:05 crc kubenswrapper[4745]: I0127 12:18:05.023377 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 12:18:05 crc kubenswrapper[4745]: I0127 12:18:05.463987 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 12:18:05 crc kubenswrapper[4745]: I0127 12:18:05.538107 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 12:18:05 crc kubenswrapper[4745]: I0127 12:18:05.826450 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-878cbb874-m826x"] Jan 27 12:18:05 crc kubenswrapper[4745]: E0127 12:18:05.826688 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fecdc41-ce07-4598-8b9b-92ef20b9cbf9" containerName="route-controller-manager" Jan 27 12:18:05 crc kubenswrapper[4745]: I0127 12:18:05.826703 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fecdc41-ce07-4598-8b9b-92ef20b9cbf9" containerName="route-controller-manager" Jan 27 12:18:05 crc kubenswrapper[4745]: I0127 12:18:05.826876 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fecdc41-ce07-4598-8b9b-92ef20b9cbf9" containerName="route-controller-manager" Jan 27 12:18:05 crc kubenswrapper[4745]: I0127 12:18:05.827307 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-878cbb874-m826x" Jan 27 12:18:05 crc kubenswrapper[4745]: I0127 12:18:05.833054 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 12:18:05 crc kubenswrapper[4745]: I0127 12:18:05.833452 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 12:18:05 crc kubenswrapper[4745]: I0127 12:18:05.834312 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 12:18:05 crc kubenswrapper[4745]: I0127 12:18:05.834852 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 12:18:05 crc kubenswrapper[4745]: I0127 12:18:05.835113 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 12:18:05 crc kubenswrapper[4745]: I0127 12:18:05.835499 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 12:18:05 crc kubenswrapper[4745]: I0127 12:18:05.837577 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 12:18:05 crc kubenswrapper[4745]: I0127 12:18:05.847024 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 12:18:05 crc kubenswrapper[4745]: I0127 12:18:05.872832 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-878cbb874-m826x"] Jan 27 12:18:05 crc kubenswrapper[4745]: I0127 12:18:05.951735 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs4v2\" (UniqueName: \"kubernetes.io/projected/c76d68ee-3702-45ec-9e6f-62d42520ce7d-kube-api-access-qs4v2\") pod \"route-controller-manager-878cbb874-m826x\" (UID: \"c76d68ee-3702-45ec-9e6f-62d42520ce7d\") " pod="openshift-route-controller-manager/route-controller-manager-878cbb874-m826x" Jan 27 12:18:05 crc kubenswrapper[4745]: I0127 12:18:05.951997 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c76d68ee-3702-45ec-9e6f-62d42520ce7d-client-ca\") pod \"route-controller-manager-878cbb874-m826x\" (UID: \"c76d68ee-3702-45ec-9e6f-62d42520ce7d\") " pod="openshift-route-controller-manager/route-controller-manager-878cbb874-m826x" Jan 27 12:18:05 crc kubenswrapper[4745]: I0127 12:18:05.952060 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c76d68ee-3702-45ec-9e6f-62d42520ce7d-serving-cert\") pod \"route-controller-manager-878cbb874-m826x\" (UID: \"c76d68ee-3702-45ec-9e6f-62d42520ce7d\") " pod="openshift-route-controller-manager/route-controller-manager-878cbb874-m826x" Jan 27 12:18:05 crc kubenswrapper[4745]: I0127 12:18:05.952146 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c76d68ee-3702-45ec-9e6f-62d42520ce7d-config\") pod \"route-controller-manager-878cbb874-m826x\" (UID: 
\"c76d68ee-3702-45ec-9e6f-62d42520ce7d\") " pod="openshift-route-controller-manager/route-controller-manager-878cbb874-m826x" Jan 27 12:18:06 crc kubenswrapper[4745]: I0127 12:18:06.053653 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs4v2\" (UniqueName: \"kubernetes.io/projected/c76d68ee-3702-45ec-9e6f-62d42520ce7d-kube-api-access-qs4v2\") pod \"route-controller-manager-878cbb874-m826x\" (UID: \"c76d68ee-3702-45ec-9e6f-62d42520ce7d\") " pod="openshift-route-controller-manager/route-controller-manager-878cbb874-m826x" Jan 27 12:18:06 crc kubenswrapper[4745]: I0127 12:18:06.054871 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c76d68ee-3702-45ec-9e6f-62d42520ce7d-client-ca\") pod \"route-controller-manager-878cbb874-m826x\" (UID: \"c76d68ee-3702-45ec-9e6f-62d42520ce7d\") " pod="openshift-route-controller-manager/route-controller-manager-878cbb874-m826x" Jan 27 12:18:06 crc kubenswrapper[4745]: I0127 12:18:06.056436 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c76d68ee-3702-45ec-9e6f-62d42520ce7d-serving-cert\") pod \"route-controller-manager-878cbb874-m826x\" (UID: \"c76d68ee-3702-45ec-9e6f-62d42520ce7d\") " pod="openshift-route-controller-manager/route-controller-manager-878cbb874-m826x" Jan 27 12:18:06 crc kubenswrapper[4745]: I0127 12:18:06.056516 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c76d68ee-3702-45ec-9e6f-62d42520ce7d-config\") pod \"route-controller-manager-878cbb874-m826x\" (UID: \"c76d68ee-3702-45ec-9e6f-62d42520ce7d\") " pod="openshift-route-controller-manager/route-controller-manager-878cbb874-m826x" Jan 27 12:18:06 crc kubenswrapper[4745]: I0127 12:18:06.056360 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c76d68ee-3702-45ec-9e6f-62d42520ce7d-client-ca\") pod \"route-controller-manager-878cbb874-m826x\" (UID: \"c76d68ee-3702-45ec-9e6f-62d42520ce7d\") " pod="openshift-route-controller-manager/route-controller-manager-878cbb874-m826x" Jan 27 12:18:06 crc kubenswrapper[4745]: I0127 12:18:06.058351 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c76d68ee-3702-45ec-9e6f-62d42520ce7d-config\") pod \"route-controller-manager-878cbb874-m826x\" (UID: \"c76d68ee-3702-45ec-9e6f-62d42520ce7d\") " pod="openshift-route-controller-manager/route-controller-manager-878cbb874-m826x" Jan 27 12:18:06 crc kubenswrapper[4745]: I0127 12:18:06.074373 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c76d68ee-3702-45ec-9e6f-62d42520ce7d-serving-cert\") pod \"route-controller-manager-878cbb874-m826x\" (UID: \"c76d68ee-3702-45ec-9e6f-62d42520ce7d\") " pod="openshift-route-controller-manager/route-controller-manager-878cbb874-m826x" Jan 27 12:18:06 crc kubenswrapper[4745]: I0127 12:18:06.075254 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs4v2\" (UniqueName: \"kubernetes.io/projected/c76d68ee-3702-45ec-9e6f-62d42520ce7d-kube-api-access-qs4v2\") pod \"route-controller-manager-878cbb874-m826x\" (UID: \"c76d68ee-3702-45ec-9e6f-62d42520ce7d\") " 
pod="openshift-route-controller-manager/route-controller-manager-878cbb874-m826x" Jan 27 12:18:06 crc kubenswrapper[4745]: I0127 12:18:06.091020 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fecdc41-ce07-4598-8b9b-92ef20b9cbf9" path="/var/lib/kubelet/pods/0fecdc41-ce07-4598-8b9b-92ef20b9cbf9/volumes" Jan 27 12:18:06 crc kubenswrapper[4745]: I0127 12:18:06.121252 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 12:18:06 crc kubenswrapper[4745]: I0127 12:18:06.173585 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-878cbb874-m826x" Jan 27 12:18:06 crc kubenswrapper[4745]: E0127 12:18:06.330434 4745 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 27 12:18:06 crc kubenswrapper[4745]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-8c895b4dc-b2htf_openshift-controller-manager_7e3273e6-4970-4fae-915c-8333f3c91d3f_0(6e0402eb60359a32ebe113ec79caee27bab09385b39f60206d0940e8bd6819ef): error adding pod openshift-controller-manager_controller-manager-8c895b4dc-b2htf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6e0402eb60359a32ebe113ec79caee27bab09385b39f60206d0940e8bd6819ef" Netns:"/var/run/netns/cfca7e10-b499-43d7-aa4a-b0f1bccfab75" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-8c895b4dc-b2htf;K8S_POD_INFRA_CONTAINER_ID=6e0402eb60359a32ebe113ec79caee27bab09385b39f60206d0940e8bd6819ef;K8S_POD_UID=7e3273e6-4970-4fae-915c-8333f3c91d3f" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-8c895b4dc-b2htf] networking: Multus: [openshift-controller-manager/controller-manager-8c895b4dc-b2htf/7e3273e6-4970-4fae-915c-8333f3c91d3f]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod controller-manager-8c895b4dc-b2htf in out of cluster comm: pod "controller-manager-8c895b4dc-b2htf" not found Jan 27 12:18:06 crc kubenswrapper[4745]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 27 12:18:06 crc kubenswrapper[4745]: > Jan 27 12:18:06 crc kubenswrapper[4745]: E0127 12:18:06.330530 4745 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 27 12:18:06 crc kubenswrapper[4745]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-8c895b4dc-b2htf_openshift-controller-manager_7e3273e6-4970-4fae-915c-8333f3c91d3f_0(6e0402eb60359a32ebe113ec79caee27bab09385b39f60206d0940e8bd6819ef): error adding pod openshift-controller-manager_controller-manager-8c895b4dc-b2htf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6e0402eb60359a32ebe113ec79caee27bab09385b39f60206d0940e8bd6819ef" Netns:"/var/run/netns/cfca7e10-b499-43d7-aa4a-b0f1bccfab75" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-8c895b4dc-b2htf;K8S_POD_INFRA_CONTAINER_ID=6e0402eb60359a32ebe113ec79caee27bab09385b39f60206d0940e8bd6819ef;K8S_POD_UID=7e3273e6-4970-4fae-915c-8333f3c91d3f" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-8c895b4dc-b2htf] networking: Multus: [openshift-controller-manager/controller-manager-8c895b4dc-b2htf/7e3273e6-4970-4fae-915c-8333f3c91d3f]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod controller-manager-8c895b4dc-b2htf in out of cluster comm: pod "controller-manager-8c895b4dc-b2htf" not found Jan 27 12:18:06 crc kubenswrapper[4745]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 27 12:18:06 crc kubenswrapper[4745]: > pod="openshift-controller-manager/controller-manager-8c895b4dc-b2htf" Jan 27 12:18:06 crc kubenswrapper[4745]: E0127 12:18:06.330555 4745 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 27 12:18:06 crc kubenswrapper[4745]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-8c895b4dc-b2htf_openshift-controller-manager_7e3273e6-4970-4fae-915c-8333f3c91d3f_0(6e0402eb60359a32ebe113ec79caee27bab09385b39f60206d0940e8bd6819ef): error adding pod openshift-controller-manager_controller-manager-8c895b4dc-b2htf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6e0402eb60359a32ebe113ec79caee27bab09385b39f60206d0940e8bd6819ef" Netns:"/var/run/netns/cfca7e10-b499-43d7-aa4a-b0f1bccfab75" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-8c895b4dc-b2htf;K8S_POD_INFRA_CONTAINER_ID=6e0402eb60359a32ebe113ec79caee27bab09385b39f60206d0940e8bd6819ef;K8S_POD_UID=7e3273e6-4970-4fae-915c-8333f3c91d3f" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-8c895b4dc-b2htf] networking: Multus: [openshift-controller-manager/controller-manager-8c895b4dc-b2htf/7e3273e6-4970-4fae-915c-8333f3c91d3f]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod controller-manager-8c895b4dc-b2htf in out of cluster comm: pod "controller-manager-8c895b4dc-b2htf" not found Jan 27 12:18:06 crc kubenswrapper[4745]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 27 12:18:06 crc kubenswrapper[4745]: > pod="openshift-controller-manager/controller-manager-8c895b4dc-b2htf" Jan 27 12:18:06 crc kubenswrapper[4745]: E0127 12:18:06.330608 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"controller-manager-8c895b4dc-b2htf_openshift-controller-manager(7e3273e6-4970-4fae-915c-8333f3c91d3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-8c895b4dc-b2htf_openshift-controller-manager(7e3273e6-4970-4fae-915c-8333f3c91d3f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-8c895b4dc-b2htf_openshift-controller-manager_7e3273e6-4970-4fae-915c-8333f3c91d3f_0(6e0402eb60359a32ebe113ec79caee27bab09385b39f60206d0940e8bd6819ef): error adding pod openshift-controller-manager_controller-manager-8c895b4dc-b2htf to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"6e0402eb60359a32ebe113ec79caee27bab09385b39f60206d0940e8bd6819ef\\\" Netns:\\\"/var/run/netns/cfca7e10-b499-43d7-aa4a-b0f1bccfab75\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-8c895b4dc-b2htf;K8S_POD_INFRA_CONTAINER_ID=6e0402eb60359a32ebe113ec79caee27bab09385b39f60206d0940e8bd6819ef;K8S_POD_UID=7e3273e6-4970-4fae-915c-8333f3c91d3f\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-8c895b4dc-b2htf] networking: Multus: [openshift-controller-manager/controller-manager-8c895b4dc-b2htf/7e3273e6-4970-4fae-915c-8333f3c91d3f]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod controller-manager-8c895b4dc-b2htf in out of cluster comm: pod \\\"controller-manager-8c895b4dc-b2htf\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-8c895b4dc-b2htf" podUID="7e3273e6-4970-4fae-915c-8333f3c91d3f" Jan 27 12:18:06 crc kubenswrapper[4745]: I0127 12:18:06.374693 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 12:18:06 crc kubenswrapper[4745]: I0127 12:18:06.619544 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 12:18:06 crc kubenswrapper[4745]: I0127 12:18:06.691163 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 12:18:06 crc kubenswrapper[4745]: I0127 12:18:06.862009 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 12:18:06 crc kubenswrapper[4745]: I0127 12:18:06.937353 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 12:18:07 crc kubenswrapper[4745]: I0127 12:18:07.020661 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 12:18:07 crc kubenswrapper[4745]: I0127 12:18:07.167422 4745 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8c895b4dc-b2htf" Jan 27 12:18:07 crc kubenswrapper[4745]: I0127 12:18:07.169107 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8c895b4dc-b2htf" Jan 27 12:18:07 crc kubenswrapper[4745]: I0127 12:18:07.273920 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 12:18:07 crc kubenswrapper[4745]: I0127 12:18:07.432325 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 12:18:07 crc kubenswrapper[4745]: I0127 12:18:07.455145 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 12:18:07 crc kubenswrapper[4745]: I0127 12:18:07.849149 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 12:18:07 crc kubenswrapper[4745]: I0127 12:18:07.849537 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 12:18:08 crc kubenswrapper[4745]: I0127 12:18:08.444800 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 12:18:08 crc kubenswrapper[4745]: I0127 12:18:08.476474 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 12:18:09 crc kubenswrapper[4745]: E0127 12:18:09.066576 4745 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 27 12:18:09 crc kubenswrapper[4745]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-878cbb874-m826x_openshift-route-controller-manager_c76d68ee-3702-45ec-9e6f-62d42520ce7d_0(ebe6c3dba928fee739193b4cc1e1e24f7adebf6c3edd5a7c26e18f59b6b36e4e): error adding pod openshift-route-controller-manager_route-controller-manager-878cbb874-m826x to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ebe6c3dba928fee739193b4cc1e1e24f7adebf6c3edd5a7c26e18f59b6b36e4e" Netns:"/var/run/netns/96d744b7-138a-4415-a096-fedc6e80d9dd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-878cbb874-m826x;K8S_POD_INFRA_CONTAINER_ID=ebe6c3dba928fee739193b4cc1e1e24f7adebf6c3edd5a7c26e18f59b6b36e4e;K8S_POD_UID=c76d68ee-3702-45ec-9e6f-62d42520ce7d" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-878cbb874-m826x] networking: Multus: [openshift-route-controller-manager/route-controller-manager-878cbb874-m826x/c76d68ee-3702-45ec-9e6f-62d42520ce7d]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-878cbb874-m826x in out of cluster comm: pod "route-controller-manager-878cbb874-m826x" not found Jan 27 12:18:09 crc kubenswrapper[4745]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 27 12:18:09 crc kubenswrapper[4745]: > Jan 27 12:18:09 crc kubenswrapper[4745]: E0127 12:18:09.066928 4745 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 27 12:18:09 crc kubenswrapper[4745]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-878cbb874-m826x_openshift-route-controller-manager_c76d68ee-3702-45ec-9e6f-62d42520ce7d_0(ebe6c3dba928fee739193b4cc1e1e24f7adebf6c3edd5a7c26e18f59b6b36e4e): error adding pod openshift-route-controller-manager_route-controller-manager-878cbb874-m826x to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ebe6c3dba928fee739193b4cc1e1e24f7adebf6c3edd5a7c26e18f59b6b36e4e" Netns:"/var/run/netns/96d744b7-138a-4415-a096-fedc6e80d9dd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-878cbb874-m826x;K8S_POD_INFRA_CONTAINER_ID=ebe6c3dba928fee739193b4cc1e1e24f7adebf6c3edd5a7c26e18f59b6b36e4e;K8S_POD_UID=c76d68ee-3702-45ec-9e6f-62d42520ce7d" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-878cbb874-m826x] networking: Multus: [openshift-route-controller-manager/route-controller-manager-878cbb874-m826x/c76d68ee-3702-45ec-9e6f-62d42520ce7d]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-878cbb874-m826x in out of cluster comm: pod "route-controller-manager-878cbb874-m826x" not found Jan 27 12:18:09 crc kubenswrapper[4745]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 27 12:18:09 crc kubenswrapper[4745]: > pod="openshift-route-controller-manager/route-controller-manager-878cbb874-m826x" Jan 27 12:18:09 crc kubenswrapper[4745]: E0127 12:18:09.066951 4745 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 27 12:18:09 crc kubenswrapper[4745]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-878cbb874-m826x_openshift-route-controller-manager_c76d68ee-3702-45ec-9e6f-62d42520ce7d_0(ebe6c3dba928fee739193b4cc1e1e24f7adebf6c3edd5a7c26e18f59b6b36e4e): error adding pod openshift-route-controller-manager_route-controller-manager-878cbb874-m826x to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ebe6c3dba928fee739193b4cc1e1e24f7adebf6c3edd5a7c26e18f59b6b36e4e" Netns:"/var/run/netns/96d744b7-138a-4415-a096-fedc6e80d9dd" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-878cbb874-m826x;K8S_POD_INFRA_CONTAINER_ID=ebe6c3dba928fee739193b4cc1e1e24f7adebf6c3edd5a7c26e18f59b6b36e4e;K8S_POD_UID=c76d68ee-3702-45ec-9e6f-62d42520ce7d" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-878cbb874-m826x] networking: Multus: [openshift-route-controller-manager/route-controller-manager-878cbb874-m826x/c76d68ee-3702-45ec-9e6f-62d42520ce7d]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-878cbb874-m826x in out of cluster comm: pod "route-controller-manager-878cbb874-m826x" not found Jan 27 12:18:09 crc kubenswrapper[4745]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 27 12:18:09 crc kubenswrapper[4745]: > pod="openshift-route-controller-manager/route-controller-manager-878cbb874-m826x" Jan 27 12:18:09 crc kubenswrapper[4745]: E0127 12:18:09.067020 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-878cbb874-m826x_openshift-route-controller-manager(c76d68ee-3702-45ec-9e6f-62d42520ce7d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-878cbb874-m826x_openshift-route-controller-manager(c76d68ee-3702-45ec-9e6f-62d42520ce7d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-878cbb874-m826x_openshift-route-controller-manager_c76d68ee-3702-45ec-9e6f-62d42520ce7d_0(ebe6c3dba928fee739193b4cc1e1e24f7adebf6c3edd5a7c26e18f59b6b36e4e): error adding pod openshift-route-controller-manager_route-controller-manager-878cbb874-m826x to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"ebe6c3dba928fee739193b4cc1e1e24f7adebf6c3edd5a7c26e18f59b6b36e4e\\\" Netns:\\\"/var/run/netns/96d744b7-138a-4415-a096-fedc6e80d9dd\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-878cbb874-m826x;K8S_POD_INFRA_CONTAINER_ID=ebe6c3dba928fee739193b4cc1e1e24f7adebf6c3edd5a7c26e18f59b6b36e4e;K8S_POD_UID=c76d68ee-3702-45ec-9e6f-62d42520ce7d\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-878cbb874-m826x] networking: Multus: [openshift-route-controller-manager/route-controller-manager-878cbb874-m826x/c76d68ee-3702-45ec-9e6f-62d42520ce7d]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-878cbb874-m826x in out of cluster comm: pod \\\"route-controller-manager-878cbb874-m826x\\\" not found\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-route-controller-manager/route-controller-manager-878cbb874-m826x" podUID="c76d68ee-3702-45ec-9e6f-62d42520ce7d" Jan 27 12:18:09 crc kubenswrapper[4745]: I0127 12:18:09.176883 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-878cbb874-m826x" Jan 27 12:18:09 crc kubenswrapper[4745]: I0127 12:18:09.177418 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-878cbb874-m826x" Jan 27 12:18:09 crc kubenswrapper[4745]: I0127 12:18:09.239544 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 12:18:09 crc kubenswrapper[4745]: I0127 12:18:09.454534 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 12:18:09 crc kubenswrapper[4745]: I0127 12:18:09.610635 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 12:18:09 crc kubenswrapper[4745]: I0127 12:18:09.680719 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 12:18:09 crc kubenswrapper[4745]: I0127 12:18:09.762431 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 12:18:09 crc kubenswrapper[4745]: I0127 12:18:09.825901 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 12:18:09 crc kubenswrapper[4745]: I0127 12:18:09.878038 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 12:18:10 crc kubenswrapper[4745]: E0127 12:18:10.050535 4745 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 27 12:18:10 crc kubenswrapper[4745]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-8c895b4dc-b2htf_openshift-controller-manager_7e3273e6-4970-4fae-915c-8333f3c91d3f_0(cab915fee47f82054fdf44378cdafa1d3295f3126dd3a80ea6d4d98637e9ffef): error adding pod openshift-controller-manager_controller-manager-8c895b4dc-b2htf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"cab915fee47f82054fdf44378cdafa1d3295f3126dd3a80ea6d4d98637e9ffef" Netns:"/var/run/netns/ca502ab3-7eda-4ed0-b14e-cd60f2948953" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-8c895b4dc-b2htf;K8S_POD_INFRA_CONTAINER_ID=cab915fee47f82054fdf44378cdafa1d3295f3126dd3a80ea6d4d98637e9ffef;K8S_POD_UID=7e3273e6-4970-4fae-915c-8333f3c91d3f" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-8c895b4dc-b2htf] 
networking: Multus: [openshift-controller-manager/controller-manager-8c895b4dc-b2htf/7e3273e6-4970-4fae-915c-8333f3c91d3f]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod controller-manager-8c895b4dc-b2htf in out of cluster comm: pod "controller-manager-8c895b4dc-b2htf" not found Jan 27 12:18:10 crc kubenswrapper[4745]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 27 12:18:10 crc kubenswrapper[4745]: > Jan 27 12:18:10 crc kubenswrapper[4745]: E0127 12:18:10.050999 4745 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 27 12:18:10 crc kubenswrapper[4745]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-8c895b4dc-b2htf_openshift-controller-manager_7e3273e6-4970-4fae-915c-8333f3c91d3f_0(cab915fee47f82054fdf44378cdafa1d3295f3126dd3a80ea6d4d98637e9ffef): error adding pod openshift-controller-manager_controller-manager-8c895b4dc-b2htf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"cab915fee47f82054fdf44378cdafa1d3295f3126dd3a80ea6d4d98637e9ffef" Netns:"/var/run/netns/ca502ab3-7eda-4ed0-b14e-cd60f2948953" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-8c895b4dc-b2htf;K8S_POD_INFRA_CONTAINER_ID=cab915fee47f82054fdf44378cdafa1d3295f3126dd3a80ea6d4d98637e9ffef;K8S_POD_UID=7e3273e6-4970-4fae-915c-8333f3c91d3f" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-8c895b4dc-b2htf] networking: Multus: [openshift-controller-manager/controller-manager-8c895b4dc-b2htf/7e3273e6-4970-4fae-915c-8333f3c91d3f]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod controller-manager-8c895b4dc-b2htf in out of cluster comm: pod "controller-manager-8c895b4dc-b2htf" not found Jan 27 12:18:10 crc kubenswrapper[4745]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 27 12:18:10 crc kubenswrapper[4745]: > pod="openshift-controller-manager/controller-manager-8c895b4dc-b2htf" Jan 27 12:18:10 crc kubenswrapper[4745]: E0127 12:18:10.051029 4745 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 27 12:18:10 crc kubenswrapper[4745]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-8c895b4dc-b2htf_openshift-controller-manager_7e3273e6-4970-4fae-915c-8333f3c91d3f_0(cab915fee47f82054fdf44378cdafa1d3295f3126dd3a80ea6d4d98637e9ffef): error adding pod openshift-controller-manager_controller-manager-8c895b4dc-b2htf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"cab915fee47f82054fdf44378cdafa1d3295f3126dd3a80ea6d4d98637e9ffef" Netns:"/var/run/netns/ca502ab3-7eda-4ed0-b14e-cd60f2948953" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-8c895b4dc-b2htf;K8S_POD_INFRA_CONTAINER_ID=cab915fee47f82054fdf44378cdafa1d3295f3126dd3a80ea6d4d98637e9ffef;K8S_POD_UID=7e3273e6-4970-4fae-915c-8333f3c91d3f" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-8c895b4dc-b2htf] networking: Multus: [openshift-controller-manager/controller-manager-8c895b4dc-b2htf/7e3273e6-4970-4fae-915c-8333f3c91d3f]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod controller-manager-8c895b4dc-b2htf in out of cluster comm: pod "controller-manager-8c895b4dc-b2htf" not found Jan 27 12:18:10 crc kubenswrapper[4745]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 27 12:18:10 crc kubenswrapper[4745]: > pod="openshift-controller-manager/controller-manager-8c895b4dc-b2htf" Jan 27 12:18:10 crc kubenswrapper[4745]: E0127 12:18:10.051087 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-8c895b4dc-b2htf_openshift-controller-manager(7e3273e6-4970-4fae-915c-8333f3c91d3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-8c895b4dc-b2htf_openshift-controller-manager(7e3273e6-4970-4fae-915c-8333f3c91d3f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-8c895b4dc-b2htf_openshift-controller-manager_7e3273e6-4970-4fae-915c-8333f3c91d3f_0(cab915fee47f82054fdf44378cdafa1d3295f3126dd3a80ea6d4d98637e9ffef): error adding pod openshift-controller-manager_controller-manager-8c895b4dc-b2htf to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"cab915fee47f82054fdf44378cdafa1d3295f3126dd3a80ea6d4d98637e9ffef\\\" Netns:\\\"/var/run/netns/ca502ab3-7eda-4ed0-b14e-cd60f2948953\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-8c895b4dc-b2htf;K8S_POD_INFRA_CONTAINER_ID=cab915fee47f82054fdf44378cdafa1d3295f3126dd3a80ea6d4d98637e9ffef;K8S_POD_UID=7e3273e6-4970-4fae-915c-8333f3c91d3f\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-8c895b4dc-b2htf] networking: Multus: [openshift-controller-manager/controller-manager-8c895b4dc-b2htf/7e3273e6-4970-4fae-915c-8333f3c91d3f]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod controller-manager-8c895b4dc-b2htf in out of cluster comm: pod \\\"controller-manager-8c895b4dc-b2htf\\\" not found\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-8c895b4dc-b2htf" podUID="7e3273e6-4970-4fae-915c-8333f3c91d3f" Jan 27 12:18:10 crc kubenswrapper[4745]: I0127 12:18:10.078951 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 12:18:10 crc kubenswrapper[4745]: I0127 12:18:10.405758 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 12:18:11 crc kubenswrapper[4745]: I0127 12:18:11.104566 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 12:18:11 crc kubenswrapper[4745]: I0127 12:18:11.105157 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 12:18:11 crc kubenswrapper[4745]: I0127 12:18:11.207832 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 12:18:11 crc kubenswrapper[4745]: I0127 12:18:11.417829 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 12:18:11 crc kubenswrapper[4745]: I0127 12:18:11.507996 4745 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 12:18:11 crc kubenswrapper[4745]: I0127 12:18:11.778573 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 12:18:12 crc kubenswrapper[4745]: E0127 12:18:12.072052 4745 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 27 12:18:12 crc kubenswrapper[4745]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-878cbb874-m826x_openshift-route-controller-manager_c76d68ee-3702-45ec-9e6f-62d42520ce7d_0(12ce004c788eafdbe9ce2e4c59fac89d1a694c6c5c7abd6b9ea32fa1c1cd3ff1): error adding pod openshift-route-controller-manager_route-controller-manager-878cbb874-m826x to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"12ce004c788eafdbe9ce2e4c59fac89d1a694c6c5c7abd6b9ea32fa1c1cd3ff1" Netns:"/var/run/netns/5edfe15c-c3b9-4b6c-bcd9-0152a6ead3a2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-878cbb874-m826x;K8S_POD_INFRA_CONTAINER_ID=12ce004c788eafdbe9ce2e4c59fac89d1a694c6c5c7abd6b9ea32fa1c1cd3ff1;K8S_POD_UID=c76d68ee-3702-45ec-9e6f-62d42520ce7d" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-878cbb874-m826x] networking: Multus: [openshift-route-controller-manager/route-controller-manager-878cbb874-m826x/c76d68ee-3702-45ec-9e6f-62d42520ce7d]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-878cbb874-m826x 
in out of cluster comm: pod "route-controller-manager-878cbb874-m826x" not found Jan 27 12:18:12 crc kubenswrapper[4745]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 27 12:18:12 crc kubenswrapper[4745]: > Jan 27 12:18:12 crc kubenswrapper[4745]: E0127 12:18:12.072419 4745 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 27 12:18:12 crc kubenswrapper[4745]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-878cbb874-m826x_openshift-route-controller-manager_c76d68ee-3702-45ec-9e6f-62d42520ce7d_0(12ce004c788eafdbe9ce2e4c59fac89d1a694c6c5c7abd6b9ea32fa1c1cd3ff1): error adding pod openshift-route-controller-manager_route-controller-manager-878cbb874-m826x to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"12ce004c788eafdbe9ce2e4c59fac89d1a694c6c5c7abd6b9ea32fa1c1cd3ff1" Netns:"/var/run/netns/5edfe15c-c3b9-4b6c-bcd9-0152a6ead3a2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-878cbb874-m826x;K8S_POD_INFRA_CONTAINER_ID=12ce004c788eafdbe9ce2e4c59fac89d1a694c6c5c7abd6b9ea32fa1c1cd3ff1;K8S_POD_UID=c76d68ee-3702-45ec-9e6f-62d42520ce7d" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-878cbb874-m826x] networking: Multus: [openshift-route-controller-manager/route-controller-manager-878cbb874-m826x/c76d68ee-3702-45ec-9e6f-62d42520ce7d]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-878cbb874-m826x in out of cluster comm: pod "route-controller-manager-878cbb874-m826x" not found Jan 27 12:18:12 crc kubenswrapper[4745]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 27 12:18:12 crc kubenswrapper[4745]: > pod="openshift-route-controller-manager/route-controller-manager-878cbb874-m826x" Jan 27 12:18:12 crc kubenswrapper[4745]: E0127 12:18:12.072437 4745 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 27 12:18:12 crc kubenswrapper[4745]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-878cbb874-m826x_openshift-route-controller-manager_c76d68ee-3702-45ec-9e6f-62d42520ce7d_0(12ce004c788eafdbe9ce2e4c59fac89d1a694c6c5c7abd6b9ea32fa1c1cd3ff1): error adding pod openshift-route-controller-manager_route-controller-manager-878cbb874-m826x to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"12ce004c788eafdbe9ce2e4c59fac89d1a694c6c5c7abd6b9ea32fa1c1cd3ff1" Netns:"/var/run/netns/5edfe15c-c3b9-4b6c-bcd9-0152a6ead3a2" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-878cbb874-m826x;K8S_POD_INFRA_CONTAINER_ID=12ce004c788eafdbe9ce2e4c59fac89d1a694c6c5c7abd6b9ea32fa1c1cd3ff1;K8S_POD_UID=c76d68ee-3702-45ec-9e6f-62d42520ce7d" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-878cbb874-m826x] networking: Multus: [openshift-route-controller-manager/route-controller-manager-878cbb874-m826x/c76d68ee-3702-45ec-9e6f-62d42520ce7d]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-878cbb874-m826x in out of cluster comm: pod "route-controller-manager-878cbb874-m826x" not found Jan 27 12:18:12 crc kubenswrapper[4745]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 27 12:18:12 crc kubenswrapper[4745]: > pod="openshift-route-controller-manager/route-controller-manager-878cbb874-m826x" Jan 27 12:18:12 crc kubenswrapper[4745]: E0127 12:18:12.072489 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-878cbb874-m826x_openshift-route-controller-manager(c76d68ee-3702-45ec-9e6f-62d42520ce7d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-878cbb874-m826x_openshift-route-controller-manager(c76d68ee-3702-45ec-9e6f-62d42520ce7d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-878cbb874-m826x_openshift-route-controller-manager_c76d68ee-3702-45ec-9e6f-62d42520ce7d_0(12ce004c788eafdbe9ce2e4c59fac89d1a694c6c5c7abd6b9ea32fa1c1cd3ff1): error adding pod openshift-route-controller-manager_route-controller-manager-878cbb874-m826x to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"12ce004c788eafdbe9ce2e4c59fac89d1a694c6c5c7abd6b9ea32fa1c1cd3ff1\\\" Netns:\\\"/var/run/netns/5edfe15c-c3b9-4b6c-bcd9-0152a6ead3a2\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-878cbb874-m826x;K8S_POD_INFRA_CONTAINER_ID=12ce004c788eafdbe9ce2e4c59fac89d1a694c6c5c7abd6b9ea32fa1c1cd3ff1;K8S_POD_UID=c76d68ee-3702-45ec-9e6f-62d42520ce7d\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-878cbb874-m826x] networking: Multus: [openshift-route-controller-manager/route-controller-manager-878cbb874-m826x/c76d68ee-3702-45ec-9e6f-62d42520ce7d]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-878cbb874-m826x in out of cluster comm: pod \\\"route-controller-manager-878cbb874-m826x\\\" not found\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-route-controller-manager/route-controller-manager-878cbb874-m826x" podUID="c76d68ee-3702-45ec-9e6f-62d42520ce7d" Jan 27 12:18:12 crc kubenswrapper[4745]: I0127 12:18:12.300091 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 12:18:12 crc kubenswrapper[4745]: I0127 12:18:12.635173 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 12:18:12 crc kubenswrapper[4745]: I0127 12:18:12.650465 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 12:18:13 crc kubenswrapper[4745]: I0127 12:18:13.005625 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 12:18:13 crc kubenswrapper[4745]: I0127 12:18:13.885981 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 12:18:14 crc kubenswrapper[4745]: I0127 12:18:14.088410 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 12:18:14 crc kubenswrapper[4745]: I0127 12:18:14.433432 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 12:18:14 crc kubenswrapper[4745]: I0127 12:18:14.634284 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 12:18:15 crc kubenswrapper[4745]: I0127 12:18:15.591166 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.231289 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8c895b4dc-b2htf"] Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.231734 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8c895b4dc-b2htf" Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.240800 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8c895b4dc-b2htf" Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.241477 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.247779 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-878cbb874-m826x"] Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.247925 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-878cbb874-m826x" Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.255449 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-878cbb874-m826x" Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.384942 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c76d68ee-3702-45ec-9e6f-62d42520ce7d-client-ca\") pod \"c76d68ee-3702-45ec-9e6f-62d42520ce7d\" (UID: \"c76d68ee-3702-45ec-9e6f-62d42520ce7d\") " Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.385012 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e3273e6-4970-4fae-915c-8333f3c91d3f-serving-cert\") pod \"7e3273e6-4970-4fae-915c-8333f3c91d3f\" (UID: \"7e3273e6-4970-4fae-915c-8333f3c91d3f\") " Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.385033 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e3273e6-4970-4fae-915c-8333f3c91d3f-config\") pod \"7e3273e6-4970-4fae-915c-8333f3c91d3f\" (UID: \"7e3273e6-4970-4fae-915c-8333f3c91d3f\") " Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.385094 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4v2\" (UniqueName: \"kubernetes.io/projected/c76d68ee-3702-45ec-9e6f-62d42520ce7d-kube-api-access-qs4v2\") pod \"c76d68ee-3702-45ec-9e6f-62d42520ce7d\" (UID: \"c76d68ee-3702-45ec-9e6f-62d42520ce7d\") " Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.385159 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqfxs\" (UniqueName: \"kubernetes.io/projected/7e3273e6-4970-4fae-915c-8333f3c91d3f-kube-api-access-vqfxs\") pod \"7e3273e6-4970-4fae-915c-8333f3c91d3f\" (UID: \"7e3273e6-4970-4fae-915c-8333f3c91d3f\") " Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.385188 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c76d68ee-3702-45ec-9e6f-62d42520ce7d-serving-cert\") pod \"c76d68ee-3702-45ec-9e6f-62d42520ce7d\" (UID: \"c76d68ee-3702-45ec-9e6f-62d42520ce7d\") " Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.385216 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e3273e6-4970-4fae-915c-8333f3c91d3f-proxy-ca-bundles\") pod \"7e3273e6-4970-4fae-915c-8333f3c91d3f\" (UID: \"7e3273e6-4970-4fae-915c-8333f3c91d3f\") " Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.385271 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c76d68ee-3702-45ec-9e6f-62d42520ce7d-config\") pod \"c76d68ee-3702-45ec-9e6f-62d42520ce7d\" (UID: \"c76d68ee-3702-45ec-9e6f-62d42520ce7d\") " Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.385290 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e3273e6-4970-4fae-915c-8333f3c91d3f-client-ca\") pod \"7e3273e6-4970-4fae-915c-8333f3c91d3f\" (UID: \"7e3273e6-4970-4fae-915c-8333f3c91d3f\") " Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.385397 4745 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c76d68ee-3702-45ec-9e6f-62d42520ce7d-client-ca" (OuterVolumeSpecName: "client-ca") pod "c76d68ee-3702-45ec-9e6f-62d42520ce7d" (UID: "c76d68ee-3702-45ec-9e6f-62d42520ce7d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.385686 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c76d68ee-3702-45ec-9e6f-62d42520ce7d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.385798 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e3273e6-4970-4fae-915c-8333f3c91d3f-client-ca" (OuterVolumeSpecName: "client-ca") pod "7e3273e6-4970-4fae-915c-8333f3c91d3f" (UID: "7e3273e6-4970-4fae-915c-8333f3c91d3f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.386147 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e3273e6-4970-4fae-915c-8333f3c91d3f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7e3273e6-4970-4fae-915c-8333f3c91d3f" (UID: "7e3273e6-4970-4fae-915c-8333f3c91d3f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.386459 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c76d68ee-3702-45ec-9e6f-62d42520ce7d-config" (OuterVolumeSpecName: "config") pod "c76d68ee-3702-45ec-9e6f-62d42520ce7d" (UID: "c76d68ee-3702-45ec-9e6f-62d42520ce7d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.386615 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e3273e6-4970-4fae-915c-8333f3c91d3f-config" (OuterVolumeSpecName: "config") pod "7e3273e6-4970-4fae-915c-8333f3c91d3f" (UID: "7e3273e6-4970-4fae-915c-8333f3c91d3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.391828 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c76d68ee-3702-45ec-9e6f-62d42520ce7d-kube-api-access-qs4v2" (OuterVolumeSpecName: "kube-api-access-qs4v2") pod "c76d68ee-3702-45ec-9e6f-62d42520ce7d" (UID: "c76d68ee-3702-45ec-9e6f-62d42520ce7d"). InnerVolumeSpecName "kube-api-access-qs4v2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.392508 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c76d68ee-3702-45ec-9e6f-62d42520ce7d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c76d68ee-3702-45ec-9e6f-62d42520ce7d" (UID: "c76d68ee-3702-45ec-9e6f-62d42520ce7d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.393384 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e3273e6-4970-4fae-915c-8333f3c91d3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7e3273e6-4970-4fae-915c-8333f3c91d3f" (UID: "7e3273e6-4970-4fae-915c-8333f3c91d3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.393614 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e3273e6-4970-4fae-915c-8333f3c91d3f-kube-api-access-vqfxs" (OuterVolumeSpecName: "kube-api-access-vqfxs") pod "7e3273e6-4970-4fae-915c-8333f3c91d3f" (UID: "7e3273e6-4970-4fae-915c-8333f3c91d3f"). InnerVolumeSpecName "kube-api-access-vqfxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.486631 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e3273e6-4970-4fae-915c-8333f3c91d3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.486690 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e3273e6-4970-4fae-915c-8333f3c91d3f-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.486705 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4v2\" (UniqueName: \"kubernetes.io/projected/c76d68ee-3702-45ec-9e6f-62d42520ce7d-kube-api-access-qs4v2\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.486720 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqfxs\" (UniqueName: \"kubernetes.io/projected/7e3273e6-4970-4fae-915c-8333f3c91d3f-kube-api-access-vqfxs\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.486732 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c76d68ee-3702-45ec-9e6f-62d42520ce7d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.486743 4745 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e3273e6-4970-4fae-915c-8333f3c91d3f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.486755 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c76d68ee-3702-45ec-9e6f-62d42520ce7d-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:16 crc kubenswrapper[4745]: I0127 12:18:16.486765 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e3273e6-4970-4fae-915c-8333f3c91d3f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:17 crc kubenswrapper[4745]: I0127 12:18:17.216974 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-878cbb874-m826x" Jan 27 12:18:17 crc kubenswrapper[4745]: I0127 12:18:17.216979 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8c895b4dc-b2htf" Jan 27 12:18:17 crc kubenswrapper[4745]: I0127 12:18:17.256312 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8c895b4dc-b2htf"] Jan 27 12:18:17 crc kubenswrapper[4745]: I0127 12:18:17.261653 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8c895b4dc-b2htf"] Jan 27 12:18:17 crc kubenswrapper[4745]: I0127 12:18:17.293532 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-878cbb874-m826x"] Jan 27 12:18:17 crc kubenswrapper[4745]: I0127 12:18:17.297382 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-878cbb874-m826x"] Jan 27 12:18:17 crc kubenswrapper[4745]: I0127 12:18:17.834644 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s"] Jan 27 12:18:17 crc kubenswrapper[4745]: I0127 12:18:17.835658 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s" Jan 27 12:18:17 crc kubenswrapper[4745]: I0127 12:18:17.837748 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv"] Jan 27 12:18:17 crc kubenswrapper[4745]: I0127 12:18:17.838211 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv" Jan 27 12:18:17 crc kubenswrapper[4745]: I0127 12:18:17.842124 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 12:18:17 crc kubenswrapper[4745]: I0127 12:18:17.842171 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 12:18:17 crc kubenswrapper[4745]: I0127 12:18:17.842191 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 12:18:17 crc kubenswrapper[4745]: I0127 12:18:17.842394 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 12:18:17 crc kubenswrapper[4745]: I0127 12:18:17.842427 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 12:18:17 crc kubenswrapper[4745]: I0127 12:18:17.842630 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 12:18:17 crc kubenswrapper[4745]: I0127 12:18:17.842974 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 12:18:17 crc kubenswrapper[4745]: I0127 12:18:17.843340 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 12:18:17 crc kubenswrapper[4745]: I0127 12:18:17.844055 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 12:18:17 crc kubenswrapper[4745]: I0127 12:18:17.844077 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" 
Jan 27 12:18:17 crc kubenswrapper[4745]: I0127 12:18:17.844084 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 12:18:17 crc kubenswrapper[4745]: I0127 12:18:17.844092 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 12:18:17 crc kubenswrapper[4745]: I0127 12:18:17.852520 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 12:18:17 crc kubenswrapper[4745]: I0127 12:18:17.854370 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv"] Jan 27 12:18:17 crc kubenswrapper[4745]: I0127 12:18:17.858273 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s"] Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.004518 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da52d737-f085-44b4-9b81-426036fa2c71-config\") pod \"controller-manager-ccbcb74d6-tcz6s\" (UID: \"da52d737-f085-44b4-9b81-426036fa2c71\") " pod="openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.004563 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da52d737-f085-44b4-9b81-426036fa2c71-serving-cert\") pod \"controller-manager-ccbcb74d6-tcz6s\" (UID: \"da52d737-f085-44b4-9b81-426036fa2c71\") " pod="openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.004601 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/199d7081-f3b7-4133-a87a-eaae89139a51-serving-cert\") pod \"route-controller-manager-b46b59686-jdgdv\" (UID: \"199d7081-f3b7-4133-a87a-eaae89139a51\") " pod="openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.004622 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/199d7081-f3b7-4133-a87a-eaae89139a51-config\") pod \"route-controller-manager-b46b59686-jdgdv\" (UID: \"199d7081-f3b7-4133-a87a-eaae89139a51\") " pod="openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.005023 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfksv\" (UniqueName: \"kubernetes.io/projected/da52d737-f085-44b4-9b81-426036fa2c71-kube-api-access-zfksv\") pod \"controller-manager-ccbcb74d6-tcz6s\" (UID: \"da52d737-f085-44b4-9b81-426036fa2c71\") " pod="openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.005128 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/199d7081-f3b7-4133-a87a-eaae89139a51-client-ca\") pod \"route-controller-manager-b46b59686-jdgdv\" (UID: \"199d7081-f3b7-4133-a87a-eaae89139a51\") " 
pod="openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.005186 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cngdh\" (UniqueName: \"kubernetes.io/projected/199d7081-f3b7-4133-a87a-eaae89139a51-kube-api-access-cngdh\") pod \"route-controller-manager-b46b59686-jdgdv\" (UID: \"199d7081-f3b7-4133-a87a-eaae89139a51\") " pod="openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.005283 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da52d737-f085-44b4-9b81-426036fa2c71-client-ca\") pod \"controller-manager-ccbcb74d6-tcz6s\" (UID: \"da52d737-f085-44b4-9b81-426036fa2c71\") " pod="openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.005334 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da52d737-f085-44b4-9b81-426036fa2c71-proxy-ca-bundles\") pod \"controller-manager-ccbcb74d6-tcz6s\" (UID: \"da52d737-f085-44b4-9b81-426036fa2c71\") " pod="openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.080595 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e3273e6-4970-4fae-915c-8333f3c91d3f" path="/var/lib/kubelet/pods/7e3273e6-4970-4fae-915c-8333f3c91d3f/volumes" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.081252 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c76d68ee-3702-45ec-9e6f-62d42520ce7d" path="/var/lib/kubelet/pods/c76d68ee-3702-45ec-9e6f-62d42520ce7d/volumes" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.106647 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/199d7081-f3b7-4133-a87a-eaae89139a51-client-ca\") pod \"route-controller-manager-b46b59686-jdgdv\" (UID: \"199d7081-f3b7-4133-a87a-eaae89139a51\") " pod="openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.106720 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cngdh\" (UniqueName: \"kubernetes.io/projected/199d7081-f3b7-4133-a87a-eaae89139a51-kube-api-access-cngdh\") pod \"route-controller-manager-b46b59686-jdgdv\" (UID: \"199d7081-f3b7-4133-a87a-eaae89139a51\") " pod="openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.106758 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da52d737-f085-44b4-9b81-426036fa2c71-client-ca\") pod \"controller-manager-ccbcb74d6-tcz6s\" (UID: \"da52d737-f085-44b4-9b81-426036fa2c71\") " pod="openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.106791 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da52d737-f085-44b4-9b81-426036fa2c71-proxy-ca-bundles\") pod \"controller-manager-ccbcb74d6-tcz6s\" (UID: 
\"da52d737-f085-44b4-9b81-426036fa2c71\") " pod="openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.106848 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da52d737-f085-44b4-9b81-426036fa2c71-config\") pod \"controller-manager-ccbcb74d6-tcz6s\" (UID: \"da52d737-f085-44b4-9b81-426036fa2c71\") " pod="openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.106863 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da52d737-f085-44b4-9b81-426036fa2c71-serving-cert\") pod \"controller-manager-ccbcb74d6-tcz6s\" (UID: \"da52d737-f085-44b4-9b81-426036fa2c71\") " pod="openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.106882 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/199d7081-f3b7-4133-a87a-eaae89139a51-serving-cert\") pod \"route-controller-manager-b46b59686-jdgdv\" (UID: \"199d7081-f3b7-4133-a87a-eaae89139a51\") " pod="openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.106897 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/199d7081-f3b7-4133-a87a-eaae89139a51-config\") pod \"route-controller-manager-b46b59686-jdgdv\" (UID: \"199d7081-f3b7-4133-a87a-eaae89139a51\") " pod="openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.106916 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfksv\" (UniqueName: \"kubernetes.io/projected/da52d737-f085-44b4-9b81-426036fa2c71-kube-api-access-zfksv\") pod \"controller-manager-ccbcb74d6-tcz6s\" (UID: \"da52d737-f085-44b4-9b81-426036fa2c71\") " pod="openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.108202 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/199d7081-f3b7-4133-a87a-eaae89139a51-client-ca\") pod \"route-controller-manager-b46b59686-jdgdv\" (UID: \"199d7081-f3b7-4133-a87a-eaae89139a51\") " pod="openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.109038 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da52d737-f085-44b4-9b81-426036fa2c71-client-ca\") pod \"controller-manager-ccbcb74d6-tcz6s\" (UID: \"da52d737-f085-44b4-9b81-426036fa2c71\") " pod="openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.109831 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da52d737-f085-44b4-9b81-426036fa2c71-proxy-ca-bundles\") pod \"controller-manager-ccbcb74d6-tcz6s\" (UID: \"da52d737-f085-44b4-9b81-426036fa2c71\") " pod="openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.110782 4745 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da52d737-f085-44b4-9b81-426036fa2c71-config\") pod \"controller-manager-ccbcb74d6-tcz6s\" (UID: \"da52d737-f085-44b4-9b81-426036fa2c71\") " pod="openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.114365 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/199d7081-f3b7-4133-a87a-eaae89139a51-config\") pod \"route-controller-manager-b46b59686-jdgdv\" (UID: \"199d7081-f3b7-4133-a87a-eaae89139a51\") " pod="openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.119373 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/199d7081-f3b7-4133-a87a-eaae89139a51-serving-cert\") pod \"route-controller-manager-b46b59686-jdgdv\" (UID: \"199d7081-f3b7-4133-a87a-eaae89139a51\") " pod="openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.122609 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da52d737-f085-44b4-9b81-426036fa2c71-serving-cert\") pod \"controller-manager-ccbcb74d6-tcz6s\" (UID: \"da52d737-f085-44b4-9b81-426036fa2c71\") " pod="openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.126205 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cngdh\" (UniqueName: \"kubernetes.io/projected/199d7081-f3b7-4133-a87a-eaae89139a51-kube-api-access-cngdh\") pod \"route-controller-manager-b46b59686-jdgdv\" (UID: \"199d7081-f3b7-4133-a87a-eaae89139a51\") " pod="openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.129605 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfksv\" (UniqueName: \"kubernetes.io/projected/da52d737-f085-44b4-9b81-426036fa2c71-kube-api-access-zfksv\") pod \"controller-manager-ccbcb74d6-tcz6s\" (UID: \"da52d737-f085-44b4-9b81-426036fa2c71\") " pod="openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.158059 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.176413 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.374934 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s"] Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.439087 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv"] Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.716939 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 12:18:18 crc kubenswrapper[4745]: I0127 12:18:18.901118 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 12:18:19 crc kubenswrapper[4745]: I0127 12:18:19.229290 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s" event={"ID":"da52d737-f085-44b4-9b81-426036fa2c71","Type":"ContainerStarted","Data":"e7f86a3b610b166168a1cb3a0c85f5691dc852a02cad449ac9834dc24f3dc84b"} Jan 27 12:18:19 crc kubenswrapper[4745]: I0127 12:18:19.229389 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s" event={"ID":"da52d737-f085-44b4-9b81-426036fa2c71","Type":"ContainerStarted","Data":"7e87ccbcbc5fdc593895304efb34e1b498d35740bef7e4dbfedc52485e4d8007"} Jan 27 12:18:19 crc kubenswrapper[4745]: I0127 12:18:19.229407 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s" Jan 27 12:18:19 crc kubenswrapper[4745]: I0127 12:18:19.229530 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv" event={"ID":"199d7081-f3b7-4133-a87a-eaae89139a51","Type":"ContainerStarted","Data":"f177a8541300a41ff81df1038d7c0637de5c58a604eb10bd8a29d7f65a15a4c0"} Jan 27 12:18:19 crc kubenswrapper[4745]: I0127 12:18:19.229575 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv" event={"ID":"199d7081-f3b7-4133-a87a-eaae89139a51","Type":"ContainerStarted","Data":"b5f2c892a3d46f7d39306bb596d7386d30b43835b1d9c3d0fe0d57edb3e186ca"} Jan 27 12:18:19 crc kubenswrapper[4745]: I0127 12:18:19.235512 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s" Jan 27 12:18:19 crc kubenswrapper[4745]: I0127 12:18:19.247921 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s" podStartSLOduration=3.247899042 podStartE2EDuration="3.247899042s" podCreationTimestamp="2026-01-27 12:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:18:19.243843575 +0000 UTC m=+392.048754273" watchObservedRunningTime="2026-01-27 12:18:19.247899042 +0000 UTC m=+392.052809740" Jan 27 12:18:19 crc kubenswrapper[4745]: I0127 12:18:19.866937 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 12:18:20 crc kubenswrapper[4745]: I0127 12:18:20.235875 4745 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv" Jan 27 12:18:20 crc kubenswrapper[4745]: I0127 12:18:20.245208 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv" Jan 27 12:18:20 crc kubenswrapper[4745]: I0127 12:18:20.256352 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv" podStartSLOduration=4.256326477 podStartE2EDuration="4.256326477s" podCreationTimestamp="2026-01-27 12:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:18:20.255798531 +0000 UTC m=+393.060709249" watchObservedRunningTime="2026-01-27 12:18:20.256326477 +0000 UTC m=+393.061237175" Jan 27 12:18:20 crc kubenswrapper[4745]: I0127 12:18:20.788529 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 12:18:35 crc kubenswrapper[4745]: I0127 12:18:35.967601 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 12:18:35 crc kubenswrapper[4745]: I0127 12:18:35.968373 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 12:18:36 crc kubenswrapper[4745]: I0127 12:18:36.231429 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s"] Jan 27 12:18:36 crc kubenswrapper[4745]: I0127 12:18:36.231714 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s" podUID="da52d737-f085-44b4-9b81-426036fa2c71" containerName="controller-manager" containerID="cri-o://e7f86a3b610b166168a1cb3a0c85f5691dc852a02cad449ac9834dc24f3dc84b" gracePeriod=30 Jan 27 12:18:36 crc kubenswrapper[4745]: I0127 12:18:36.336716 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv"] Jan 27 12:18:36 crc kubenswrapper[4745]: I0127 12:18:36.337034 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv" podUID="199d7081-f3b7-4133-a87a-eaae89139a51" containerName="route-controller-manager" containerID="cri-o://f177a8541300a41ff81df1038d7c0637de5c58a604eb10bd8a29d7f65a15a4c0" gracePeriod=30 Jan 27 12:18:37 crc kubenswrapper[4745]: I0127 12:18:37.328226 4745 generic.go:334] "Generic (PLEG): container finished" podID="199d7081-f3b7-4133-a87a-eaae89139a51" containerID="f177a8541300a41ff81df1038d7c0637de5c58a604eb10bd8a29d7f65a15a4c0" exitCode=0 Jan 27 12:18:37 crc kubenswrapper[4745]: I0127 12:18:37.328357 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv" 
event={"ID":"199d7081-f3b7-4133-a87a-eaae89139a51","Type":"ContainerDied","Data":"f177a8541300a41ff81df1038d7c0637de5c58a604eb10bd8a29d7f65a15a4c0"} Jan 27 12:18:37 crc kubenswrapper[4745]: I0127 12:18:37.331157 4745 generic.go:334] "Generic (PLEG): container finished" podID="da52d737-f085-44b4-9b81-426036fa2c71" containerID="e7f86a3b610b166168a1cb3a0c85f5691dc852a02cad449ac9834dc24f3dc84b" exitCode=0 Jan 27 12:18:37 crc kubenswrapper[4745]: I0127 12:18:37.331225 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s" event={"ID":"da52d737-f085-44b4-9b81-426036fa2c71","Type":"ContainerDied","Data":"e7f86a3b610b166168a1cb3a0c85f5691dc852a02cad449ac9834dc24f3dc84b"} Jan 27 12:18:37 crc kubenswrapper[4745]: I0127 12:18:37.932171 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv" Jan 27 12:18:37 crc kubenswrapper[4745]: I0127 12:18:37.958517 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr"] Jan 27 12:18:37 crc kubenswrapper[4745]: E0127 12:18:37.958765 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="199d7081-f3b7-4133-a87a-eaae89139a51" containerName="route-controller-manager" Jan 27 12:18:37 crc kubenswrapper[4745]: I0127 12:18:37.958788 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="199d7081-f3b7-4133-a87a-eaae89139a51" containerName="route-controller-manager" Jan 27 12:18:37 crc kubenswrapper[4745]: I0127 12:18:37.958987 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="199d7081-f3b7-4133-a87a-eaae89139a51" containerName="route-controller-manager" Jan 27 12:18:37 crc kubenswrapper[4745]: I0127 12:18:37.959643 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr" Jan 27 12:18:37 crc kubenswrapper[4745]: I0127 12:18:37.973770 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr"] Jan 27 12:18:37 crc kubenswrapper[4745]: I0127 12:18:37.976372 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/199d7081-f3b7-4133-a87a-eaae89139a51-config\") pod \"199d7081-f3b7-4133-a87a-eaae89139a51\" (UID: \"199d7081-f3b7-4133-a87a-eaae89139a51\") " Jan 27 12:18:37 crc kubenswrapper[4745]: I0127 12:18:37.976420 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/199d7081-f3b7-4133-a87a-eaae89139a51-client-ca\") pod \"199d7081-f3b7-4133-a87a-eaae89139a51\" (UID: \"199d7081-f3b7-4133-a87a-eaae89139a51\") " Jan 27 12:18:37 crc kubenswrapper[4745]: I0127 12:18:37.976455 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cngdh\" (UniqueName: \"kubernetes.io/projected/199d7081-f3b7-4133-a87a-eaae89139a51-kube-api-access-cngdh\") pod \"199d7081-f3b7-4133-a87a-eaae89139a51\" (UID: \"199d7081-f3b7-4133-a87a-eaae89139a51\") " Jan 27 12:18:37 crc kubenswrapper[4745]: I0127 12:18:37.976487 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/199d7081-f3b7-4133-a87a-eaae89139a51-serving-cert\") pod \"199d7081-f3b7-4133-a87a-eaae89139a51\" (UID: \"199d7081-f3b7-4133-a87a-eaae89139a51\") " Jan 27 12:18:37 crc kubenswrapper[4745]: I0127 12:18:37.978158 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/199d7081-f3b7-4133-a87a-eaae89139a51-client-ca" (OuterVolumeSpecName: "client-ca") pod "199d7081-f3b7-4133-a87a-eaae89139a51" (UID: "199d7081-f3b7-4133-a87a-eaae89139a51"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:18:37 crc kubenswrapper[4745]: I0127 12:18:37.978488 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/199d7081-f3b7-4133-a87a-eaae89139a51-config" (OuterVolumeSpecName: "config") pod "199d7081-f3b7-4133-a87a-eaae89139a51" (UID: "199d7081-f3b7-4133-a87a-eaae89139a51"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:18:37 crc kubenswrapper[4745]: I0127 12:18:37.983172 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/199d7081-f3b7-4133-a87a-eaae89139a51-kube-api-access-cngdh" (OuterVolumeSpecName: "kube-api-access-cngdh") pod "199d7081-f3b7-4133-a87a-eaae89139a51" (UID: "199d7081-f3b7-4133-a87a-eaae89139a51"). InnerVolumeSpecName "kube-api-access-cngdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:18:37 crc kubenswrapper[4745]: I0127 12:18:37.983391 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/199d7081-f3b7-4133-a87a-eaae89139a51-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "199d7081-f3b7-4133-a87a-eaae89139a51" (UID: "199d7081-f3b7-4133-a87a-eaae89139a51"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.001846 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.077073 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da52d737-f085-44b4-9b81-426036fa2c71-serving-cert\") pod \"da52d737-f085-44b4-9b81-426036fa2c71\" (UID: \"da52d737-f085-44b4-9b81-426036fa2c71\") " Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.077123 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da52d737-f085-44b4-9b81-426036fa2c71-client-ca\") pod \"da52d737-f085-44b4-9b81-426036fa2c71\" (UID: \"da52d737-f085-44b4-9b81-426036fa2c71\") " Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.077217 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfksv\" (UniqueName: \"kubernetes.io/projected/da52d737-f085-44b4-9b81-426036fa2c71-kube-api-access-zfksv\") pod \"da52d737-f085-44b4-9b81-426036fa2c71\" (UID: \"da52d737-f085-44b4-9b81-426036fa2c71\") " Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.077265 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da52d737-f085-44b4-9b81-426036fa2c71-config\") pod \"da52d737-f085-44b4-9b81-426036fa2c71\" (UID: \"da52d737-f085-44b4-9b81-426036fa2c71\") " Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.077280 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da52d737-f085-44b4-9b81-426036fa2c71-proxy-ca-bundles\") pod \"da52d737-f085-44b4-9b81-426036fa2c71\" (UID: \"da52d737-f085-44b4-9b81-426036fa2c71\") " Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.077422 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2ftc\" (UniqueName: \"kubernetes.io/projected/ef501b00-038c-4e04-a308-ca40d5a5effd-kube-api-access-p2ftc\") pod \"route-controller-manager-55f4f89974-kmcjr\" (UID: \"ef501b00-038c-4e04-a308-ca40d5a5effd\") " pod="openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.077451 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef501b00-038c-4e04-a308-ca40d5a5effd-client-ca\") pod \"route-controller-manager-55f4f89974-kmcjr\" (UID: \"ef501b00-038c-4e04-a308-ca40d5a5effd\") " pod="openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.077480 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef501b00-038c-4e04-a308-ca40d5a5effd-config\") pod \"route-controller-manager-55f4f89974-kmcjr\" (UID: \"ef501b00-038c-4e04-a308-ca40d5a5effd\") " pod="openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.077532 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef501b00-038c-4e04-a308-ca40d5a5effd-serving-cert\") pod \"route-controller-manager-55f4f89974-kmcjr\" (UID: \"ef501b00-038c-4e04-a308-ca40d5a5effd\") " pod="openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.077582 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/199d7081-f3b7-4133-a87a-eaae89139a51-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.077603 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/199d7081-f3b7-4133-a87a-eaae89139a51-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.077616 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cngdh\" (UniqueName: \"kubernetes.io/projected/199d7081-f3b7-4133-a87a-eaae89139a51-kube-api-access-cngdh\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.077627 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/199d7081-f3b7-4133-a87a-eaae89139a51-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.078427 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da52d737-f085-44b4-9b81-426036fa2c71-client-ca" (OuterVolumeSpecName: "client-ca") pod "da52d737-f085-44b4-9b81-426036fa2c71" (UID: "da52d737-f085-44b4-9b81-426036fa2c71"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.078477 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da52d737-f085-44b4-9b81-426036fa2c71-config" (OuterVolumeSpecName: "config") pod "da52d737-f085-44b4-9b81-426036fa2c71" (UID: "da52d737-f085-44b4-9b81-426036fa2c71"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.078526 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da52d737-f085-44b4-9b81-426036fa2c71-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "da52d737-f085-44b4-9b81-426036fa2c71" (UID: "da52d737-f085-44b4-9b81-426036fa2c71"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.080068 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da52d737-f085-44b4-9b81-426036fa2c71-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "da52d737-f085-44b4-9b81-426036fa2c71" (UID: "da52d737-f085-44b4-9b81-426036fa2c71"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.080566 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da52d737-f085-44b4-9b81-426036fa2c71-kube-api-access-zfksv" (OuterVolumeSpecName: "kube-api-access-zfksv") pod "da52d737-f085-44b4-9b81-426036fa2c71" (UID: "da52d737-f085-44b4-9b81-426036fa2c71"). InnerVolumeSpecName "kube-api-access-zfksv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.178621 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef501b00-038c-4e04-a308-ca40d5a5effd-serving-cert\") pod \"route-controller-manager-55f4f89974-kmcjr\" (UID: \"ef501b00-038c-4e04-a308-ca40d5a5effd\") " pod="openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.178692 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2ftc\" (UniqueName: \"kubernetes.io/projected/ef501b00-038c-4e04-a308-ca40d5a5effd-kube-api-access-p2ftc\") pod \"route-controller-manager-55f4f89974-kmcjr\" (UID: \"ef501b00-038c-4e04-a308-ca40d5a5effd\") " pod="openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.178737 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef501b00-038c-4e04-a308-ca40d5a5effd-client-ca\") pod \"route-controller-manager-55f4f89974-kmcjr\" (UID: \"ef501b00-038c-4e04-a308-ca40d5a5effd\") " pod="openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.178788 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef501b00-038c-4e04-a308-ca40d5a5effd-config\") pod \"route-controller-manager-55f4f89974-kmcjr\" (UID: \"ef501b00-038c-4e04-a308-ca40d5a5effd\") " pod="openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.178911 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da52d737-f085-44b4-9b81-426036fa2c71-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.178935 4745 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da52d737-f085-44b4-9b81-426036fa2c71-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.178953 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da52d737-f085-44b4-9b81-426036fa2c71-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.178969 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da52d737-f085-44b4-9b81-426036fa2c71-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.178986 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfksv\" (UniqueName: \"kubernetes.io/projected/da52d737-f085-44b4-9b81-426036fa2c71-kube-api-access-zfksv\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.180516 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef501b00-038c-4e04-a308-ca40d5a5effd-config\") pod \"route-controller-manager-55f4f89974-kmcjr\" (UID: \"ef501b00-038c-4e04-a308-ca40d5a5effd\") " pod="openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr" Jan 27 12:18:38 
crc kubenswrapper[4745]: I0127 12:18:38.182313 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef501b00-038c-4e04-a308-ca40d5a5effd-serving-cert\") pod \"route-controller-manager-55f4f89974-kmcjr\" (UID: \"ef501b00-038c-4e04-a308-ca40d5a5effd\") " pod="openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.182679 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef501b00-038c-4e04-a308-ca40d5a5effd-client-ca\") pod \"route-controller-manager-55f4f89974-kmcjr\" (UID: \"ef501b00-038c-4e04-a308-ca40d5a5effd\") " pod="openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.194897 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2ftc\" (UniqueName: \"kubernetes.io/projected/ef501b00-038c-4e04-a308-ca40d5a5effd-kube-api-access-p2ftc\") pod \"route-controller-manager-55f4f89974-kmcjr\" (UID: \"ef501b00-038c-4e04-a308-ca40d5a5effd\") " pod="openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.312448 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.339116 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.339165 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s" event={"ID":"da52d737-f085-44b4-9b81-426036fa2c71","Type":"ContainerDied","Data":"7e87ccbcbc5fdc593895304efb34e1b498d35740bef7e4dbfedc52485e4d8007"} Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.339684 4745 scope.go:117] "RemoveContainer" containerID="e7f86a3b610b166168a1cb3a0c85f5691dc852a02cad449ac9834dc24f3dc84b" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.342317 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv" event={"ID":"199d7081-f3b7-4133-a87a-eaae89139a51","Type":"ContainerDied","Data":"b5f2c892a3d46f7d39306bb596d7386d30b43835b1d9c3d0fe0d57edb3e186ca"} Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.342388 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.369274 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv"] Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.375447 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b46b59686-jdgdv"] Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.384121 4745 scope.go:117] "RemoveContainer" containerID="f177a8541300a41ff81df1038d7c0637de5c58a604eb10bd8a29d7f65a15a4c0" Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.387146 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s"] Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.390041 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-ccbcb74d6-tcz6s"] Jan 27 12:18:38 crc kubenswrapper[4745]: I0127 12:18:38.772334 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr"] Jan 27 12:18:38 crc kubenswrapper[4745]: W0127 12:18:38.782100 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef501b00_038c_4e04_a308_ca40d5a5effd.slice/crio-d1d17d53c7deb97f8b105b6faef534ff5c9e7b7381ac427fa13ec3c0d045d7e0 WatchSource:0}: Error finding container d1d17d53c7deb97f8b105b6faef534ff5c9e7b7381ac427fa13ec3c0d045d7e0: Status 404 returned error can't find the container with id d1d17d53c7deb97f8b105b6faef534ff5c9e7b7381ac427fa13ec3c0d045d7e0 Jan 27 12:18:39 crc kubenswrapper[4745]: I0127 12:18:39.355585 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr" event={"ID":"ef501b00-038c-4e04-a308-ca40d5a5effd","Type":"ContainerStarted","Data":"46a3367bccc82d27412ecbe32a257c46dd11185c37978f0c544e344a96b28872"} Jan 27 12:18:39 crc kubenswrapper[4745]: I0127 12:18:39.355669 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr" event={"ID":"ef501b00-038c-4e04-a308-ca40d5a5effd","Type":"ContainerStarted","Data":"d1d17d53c7deb97f8b105b6faef534ff5c9e7b7381ac427fa13ec3c0d045d7e0"} Jan 27 12:18:40 crc kubenswrapper[4745]: I0127 12:18:40.079606 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="199d7081-f3b7-4133-a87a-eaae89139a51" path="/var/lib/kubelet/pods/199d7081-f3b7-4133-a87a-eaae89139a51/volumes" Jan 27 12:18:40 crc kubenswrapper[4745]: I0127 12:18:40.080563 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da52d737-f085-44b4-9b81-426036fa2c71" path="/var/lib/kubelet/pods/da52d737-f085-44b4-9b81-426036fa2c71/volumes" Jan 27 12:18:40 crc kubenswrapper[4745]: I0127 12:18:40.364394 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr" Jan 27 12:18:40 crc kubenswrapper[4745]: I0127 12:18:40.386643 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr" Jan 27 12:18:40 crc kubenswrapper[4745]: I0127 12:18:40.416665 4745 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr" podStartSLOduration=4.416643057 podStartE2EDuration="4.416643057s" podCreationTimestamp="2026-01-27 12:18:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:18:40.39769472 +0000 UTC m=+413.202605418" watchObservedRunningTime="2026-01-27 12:18:40.416643057 +0000 UTC m=+413.221553755" Jan 27 12:18:40 crc kubenswrapper[4745]: I0127 12:18:40.854954 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-764dd48c6c-4rmcv"] Jan 27 12:18:40 crc kubenswrapper[4745]: E0127 12:18:40.855212 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da52d737-f085-44b4-9b81-426036fa2c71" containerName="controller-manager" Jan 27 12:18:40 crc kubenswrapper[4745]: I0127 12:18:40.855227 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="da52d737-f085-44b4-9b81-426036fa2c71" containerName="controller-manager" Jan 27 12:18:40 crc kubenswrapper[4745]: I0127 12:18:40.855369 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="da52d737-f085-44b4-9b81-426036fa2c71" containerName="controller-manager" Jan 27 12:18:40 crc kubenswrapper[4745]: I0127 12:18:40.855843 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-764dd48c6c-4rmcv" Jan 27 12:18:40 crc kubenswrapper[4745]: I0127 12:18:40.858950 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 12:18:40 crc kubenswrapper[4745]: I0127 12:18:40.859540 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 12:18:40 crc kubenswrapper[4745]: I0127 12:18:40.859802 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 12:18:40 crc kubenswrapper[4745]: I0127 12:18:40.859853 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 12:18:40 crc kubenswrapper[4745]: I0127 12:18:40.859592 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 12:18:40 crc kubenswrapper[4745]: I0127 12:18:40.859742 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 12:18:40 crc kubenswrapper[4745]: I0127 12:18:40.869040 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 12:18:40 crc kubenswrapper[4745]: I0127 12:18:40.880405 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-764dd48c6c-4rmcv"] Jan 27 12:18:40 crc kubenswrapper[4745]: I0127 12:18:40.913879 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9d575213-22d9-4fbc-ac57-72a22b46c5fe-proxy-ca-bundles\") pod \"controller-manager-764dd48c6c-4rmcv\" (UID: \"9d575213-22d9-4fbc-ac57-72a22b46c5fe\") " pod="openshift-controller-manager/controller-manager-764dd48c6c-4rmcv" Jan 27 12:18:40 crc kubenswrapper[4745]: I0127 12:18:40.914167 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d575213-22d9-4fbc-ac57-72a22b46c5fe-client-ca\") pod \"controller-manager-764dd48c6c-4rmcv\" (UID: \"9d575213-22d9-4fbc-ac57-72a22b46c5fe\") " pod="openshift-controller-manager/controller-manager-764dd48c6c-4rmcv" Jan 27 12:18:40 crc kubenswrapper[4745]: I0127 12:18:40.914303 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxczq\" (UniqueName: \"kubernetes.io/projected/9d575213-22d9-4fbc-ac57-72a22b46c5fe-kube-api-access-qxczq\") pod \"controller-manager-764dd48c6c-4rmcv\" (UID: \"9d575213-22d9-4fbc-ac57-72a22b46c5fe\") " pod="openshift-controller-manager/controller-manager-764dd48c6c-4rmcv" Jan 27 12:18:40 crc kubenswrapper[4745]: I0127 12:18:40.914594 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d575213-22d9-4fbc-ac57-72a22b46c5fe-serving-cert\") pod \"controller-manager-764dd48c6c-4rmcv\" (UID: \"9d575213-22d9-4fbc-ac57-72a22b46c5fe\") " pod="openshift-controller-manager/controller-manager-764dd48c6c-4rmcv" Jan 27 12:18:40 crc kubenswrapper[4745]: I0127 12:18:40.914720 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d575213-22d9-4fbc-ac57-72a22b46c5fe-config\") pod \"controller-manager-764dd48c6c-4rmcv\" (UID: \"9d575213-22d9-4fbc-ac57-72a22b46c5fe\") " pod="openshift-controller-manager/controller-manager-764dd48c6c-4rmcv" Jan 27 12:18:41 crc kubenswrapper[4745]: I0127 12:18:41.016147 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d575213-22d9-4fbc-ac57-72a22b46c5fe-client-ca\") pod \"controller-manager-764dd48c6c-4rmcv\" (UID: \"9d575213-22d9-4fbc-ac57-72a22b46c5fe\") " pod="openshift-controller-manager/controller-manager-764dd48c6c-4rmcv" Jan 27 12:18:41 crc kubenswrapper[4745]: I0127 12:18:41.016189 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxczq\" (UniqueName: \"kubernetes.io/projected/9d575213-22d9-4fbc-ac57-72a22b46c5fe-kube-api-access-qxczq\") pod \"controller-manager-764dd48c6c-4rmcv\" (UID: \"9d575213-22d9-4fbc-ac57-72a22b46c5fe\") " pod="openshift-controller-manager/controller-manager-764dd48c6c-4rmcv" Jan 27 12:18:41 crc kubenswrapper[4745]: I0127 12:18:41.016226 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d575213-22d9-4fbc-ac57-72a22b46c5fe-serving-cert\") pod \"controller-manager-764dd48c6c-4rmcv\" (UID: \"9d575213-22d9-4fbc-ac57-72a22b46c5fe\") " pod="openshift-controller-manager/controller-manager-764dd48c6c-4rmcv" Jan 27 12:18:41 crc kubenswrapper[4745]: I0127 12:18:41.016254 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d575213-22d9-4fbc-ac57-72a22b46c5fe-config\") pod \"controller-manager-764dd48c6c-4rmcv\" (UID: \"9d575213-22d9-4fbc-ac57-72a22b46c5fe\") " pod="openshift-controller-manager/controller-manager-764dd48c6c-4rmcv" Jan 27 12:18:41 crc kubenswrapper[4745]: I0127 12:18:41.016283 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/9d575213-22d9-4fbc-ac57-72a22b46c5fe-proxy-ca-bundles\") pod \"controller-manager-764dd48c6c-4rmcv\" (UID: \"9d575213-22d9-4fbc-ac57-72a22b46c5fe\") " pod="openshift-controller-manager/controller-manager-764dd48c6c-4rmcv" Jan 27 12:18:41 crc kubenswrapper[4745]: I0127 12:18:41.017358 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9d575213-22d9-4fbc-ac57-72a22b46c5fe-proxy-ca-bundles\") pod \"controller-manager-764dd48c6c-4rmcv\" (UID: \"9d575213-22d9-4fbc-ac57-72a22b46c5fe\") " pod="openshift-controller-manager/controller-manager-764dd48c6c-4rmcv" Jan 27 12:18:41 crc kubenswrapper[4745]: I0127 12:18:41.018210 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d575213-22d9-4fbc-ac57-72a22b46c5fe-client-ca\") pod \"controller-manager-764dd48c6c-4rmcv\" (UID: \"9d575213-22d9-4fbc-ac57-72a22b46c5fe\") " pod="openshift-controller-manager/controller-manager-764dd48c6c-4rmcv" Jan 27 12:18:41 crc kubenswrapper[4745]: I0127 12:18:41.018559 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d575213-22d9-4fbc-ac57-72a22b46c5fe-config\") pod \"controller-manager-764dd48c6c-4rmcv\" (UID: \"9d575213-22d9-4fbc-ac57-72a22b46c5fe\") " pod="openshift-controller-manager/controller-manager-764dd48c6c-4rmcv" Jan 27 12:18:41 crc kubenswrapper[4745]: I0127 12:18:41.024162 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d575213-22d9-4fbc-ac57-72a22b46c5fe-serving-cert\") pod \"controller-manager-764dd48c6c-4rmcv\" (UID: \"9d575213-22d9-4fbc-ac57-72a22b46c5fe\") " pod="openshift-controller-manager/controller-manager-764dd48c6c-4rmcv" Jan 27 12:18:41 crc kubenswrapper[4745]: I0127 12:18:41.034693 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxczq\" (UniqueName: \"kubernetes.io/projected/9d575213-22d9-4fbc-ac57-72a22b46c5fe-kube-api-access-qxczq\") pod \"controller-manager-764dd48c6c-4rmcv\" (UID: \"9d575213-22d9-4fbc-ac57-72a22b46c5fe\") " pod="openshift-controller-manager/controller-manager-764dd48c6c-4rmcv" Jan 27 12:18:41 crc kubenswrapper[4745]: I0127 12:18:41.180206 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-764dd48c6c-4rmcv" Jan 27 12:18:41 crc kubenswrapper[4745]: I0127 12:18:41.589738 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-764dd48c6c-4rmcv"] Jan 27 12:18:42 crc kubenswrapper[4745]: I0127 12:18:42.375258 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-764dd48c6c-4rmcv" event={"ID":"9d575213-22d9-4fbc-ac57-72a22b46c5fe","Type":"ContainerStarted","Data":"7b398e59d5630be0b955242b5d7d5821bb42cb2894dc0dd19769b567ea3e0684"} Jan 27 12:18:42 crc kubenswrapper[4745]: I0127 12:18:42.375309 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-764dd48c6c-4rmcv" event={"ID":"9d575213-22d9-4fbc-ac57-72a22b46c5fe","Type":"ContainerStarted","Data":"9c8f23925437ef6acccc6ab56fdd50958926f4a913145fab2a7889f8ca01477c"} Jan 27 12:18:43 crc kubenswrapper[4745]: I0127 12:18:43.380079 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-764dd48c6c-4rmcv" Jan 27 12:18:43 crc kubenswrapper[4745]: I0127 12:18:43.384802 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-764dd48c6c-4rmcv" Jan 27 12:18:43 crc kubenswrapper[4745]: I0127 12:18:43.395101 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-764dd48c6c-4rmcv" podStartSLOduration=7.395076321 podStartE2EDuration="7.395076321s" podCreationTimestamp="2026-01-27 12:18:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:18:43.393182706 +0000 UTC m=+416.198093394" watchObservedRunningTime="2026-01-27 12:18:43.395076321 +0000 UTC m=+416.199987019" Jan 27 12:18:46 crc kubenswrapper[4745]: I0127 12:18:46.314262 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-255wr"] Jan 27 12:18:46 crc kubenswrapper[4745]: I0127 12:18:46.315011 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-255wr" podUID="64c43381-42e2-4e01-9559-70c3c56070ea" containerName="registry-server" containerID="cri-o://5001560d8ba03714a647addf077fe97b8b9d85a7595a322d09b822b0ae7693b0" gracePeriod=2 Jan 27 12:18:46 crc kubenswrapper[4745]: I0127 12:18:46.511255 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zhnbq"] Jan 27 12:18:46 crc kubenswrapper[4745]: I0127 12:18:46.511962 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zhnbq" podUID="d2b41701-5113-4970-8d93-157bf16b3c06" containerName="registry-server" containerID="cri-o://cd15586e1d10c05eef5d0049e00af975204f6a2fa477e171c9dd2d6a8af3e157" gracePeriod=2 Jan 27 12:18:47 crc kubenswrapper[4745]: I0127 12:18:47.414966 4745 generic.go:334] "Generic (PLEG): container finished" podID="d2b41701-5113-4970-8d93-157bf16b3c06" containerID="cd15586e1d10c05eef5d0049e00af975204f6a2fa477e171c9dd2d6a8af3e157" exitCode=0 Jan 27 12:18:47 crc kubenswrapper[4745]: I0127 12:18:47.415084 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhnbq" 
event={"ID":"d2b41701-5113-4970-8d93-157bf16b3c06","Type":"ContainerDied","Data":"cd15586e1d10c05eef5d0049e00af975204f6a2fa477e171c9dd2d6a8af3e157"} Jan 27 12:18:47 crc kubenswrapper[4745]: I0127 12:18:47.416783 4745 generic.go:334] "Generic (PLEG): container finished" podID="64c43381-42e2-4e01-9559-70c3c56070ea" containerID="5001560d8ba03714a647addf077fe97b8b9d85a7595a322d09b822b0ae7693b0" exitCode=0 Jan 27 12:18:47 crc kubenswrapper[4745]: I0127 12:18:47.416824 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-255wr" event={"ID":"64c43381-42e2-4e01-9559-70c3c56070ea","Type":"ContainerDied","Data":"5001560d8ba03714a647addf077fe97b8b9d85a7595a322d09b822b0ae7693b0"} Jan 27 12:18:47 crc kubenswrapper[4745]: I0127 12:18:47.453537 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zhnbq" Jan 27 12:18:47 crc kubenswrapper[4745]: I0127 12:18:47.519166 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2b41701-5113-4970-8d93-157bf16b3c06-utilities\") pod \"d2b41701-5113-4970-8d93-157bf16b3c06\" (UID: \"d2b41701-5113-4970-8d93-157bf16b3c06\") " Jan 27 12:18:47 crc kubenswrapper[4745]: I0127 12:18:47.519624 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9vlg\" (UniqueName: \"kubernetes.io/projected/d2b41701-5113-4970-8d93-157bf16b3c06-kube-api-access-t9vlg\") pod \"d2b41701-5113-4970-8d93-157bf16b3c06\" (UID: \"d2b41701-5113-4970-8d93-157bf16b3c06\") " Jan 27 12:18:47 crc kubenswrapper[4745]: I0127 12:18:47.519677 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2b41701-5113-4970-8d93-157bf16b3c06-catalog-content\") pod \"d2b41701-5113-4970-8d93-157bf16b3c06\" (UID: \"d2b41701-5113-4970-8d93-157bf16b3c06\") " Jan 27 12:18:47 crc kubenswrapper[4745]: I0127 12:18:47.522140 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2b41701-5113-4970-8d93-157bf16b3c06-utilities" (OuterVolumeSpecName: "utilities") pod "d2b41701-5113-4970-8d93-157bf16b3c06" (UID: "d2b41701-5113-4970-8d93-157bf16b3c06"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:18:47 crc kubenswrapper[4745]: I0127 12:18:47.525847 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2b41701-5113-4970-8d93-157bf16b3c06-kube-api-access-t9vlg" (OuterVolumeSpecName: "kube-api-access-t9vlg") pod "d2b41701-5113-4970-8d93-157bf16b3c06" (UID: "d2b41701-5113-4970-8d93-157bf16b3c06"). InnerVolumeSpecName "kube-api-access-t9vlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:18:47 crc kubenswrapper[4745]: I0127 12:18:47.575292 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2b41701-5113-4970-8d93-157bf16b3c06-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2b41701-5113-4970-8d93-157bf16b3c06" (UID: "d2b41701-5113-4970-8d93-157bf16b3c06"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:18:47 crc kubenswrapper[4745]: I0127 12:18:47.621375 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9vlg\" (UniqueName: \"kubernetes.io/projected/d2b41701-5113-4970-8d93-157bf16b3c06-kube-api-access-t9vlg\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:47 crc kubenswrapper[4745]: I0127 12:18:47.621411 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2b41701-5113-4970-8d93-157bf16b3c06-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:47 crc kubenswrapper[4745]: I0127 12:18:47.621420 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2b41701-5113-4970-8d93-157bf16b3c06-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:48 crc kubenswrapper[4745]: I0127 12:18:48.135612 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-255wr" Jan 27 12:18:48 crc kubenswrapper[4745]: I0127 12:18:48.230978 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64c43381-42e2-4e01-9559-70c3c56070ea-catalog-content\") pod \"64c43381-42e2-4e01-9559-70c3c56070ea\" (UID: \"64c43381-42e2-4e01-9559-70c3c56070ea\") " Jan 27 12:18:48 crc kubenswrapper[4745]: I0127 12:18:48.231422 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnv2n\" (UniqueName: \"kubernetes.io/projected/64c43381-42e2-4e01-9559-70c3c56070ea-kube-api-access-lnv2n\") pod \"64c43381-42e2-4e01-9559-70c3c56070ea\" (UID: \"64c43381-42e2-4e01-9559-70c3c56070ea\") " Jan 27 12:18:48 crc kubenswrapper[4745]: I0127 12:18:48.231471 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64c43381-42e2-4e01-9559-70c3c56070ea-utilities\") pod \"64c43381-42e2-4e01-9559-70c3c56070ea\" (UID: \"64c43381-42e2-4e01-9559-70c3c56070ea\") " Jan 27 12:18:48 crc kubenswrapper[4745]: I0127 12:18:48.232420 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64c43381-42e2-4e01-9559-70c3c56070ea-utilities" (OuterVolumeSpecName: "utilities") pod "64c43381-42e2-4e01-9559-70c3c56070ea" (UID: "64c43381-42e2-4e01-9559-70c3c56070ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:18:48 crc kubenswrapper[4745]: I0127 12:18:48.244045 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64c43381-42e2-4e01-9559-70c3c56070ea-kube-api-access-lnv2n" (OuterVolumeSpecName: "kube-api-access-lnv2n") pod "64c43381-42e2-4e01-9559-70c3c56070ea" (UID: "64c43381-42e2-4e01-9559-70c3c56070ea"). InnerVolumeSpecName "kube-api-access-lnv2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:18:48 crc kubenswrapper[4745]: I0127 12:18:48.276669 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64c43381-42e2-4e01-9559-70c3c56070ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64c43381-42e2-4e01-9559-70c3c56070ea" (UID: "64c43381-42e2-4e01-9559-70c3c56070ea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:18:48 crc kubenswrapper[4745]: I0127 12:18:48.332587 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64c43381-42e2-4e01-9559-70c3c56070ea-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:48 crc kubenswrapper[4745]: I0127 12:18:48.332623 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnv2n\" (UniqueName: \"kubernetes.io/projected/64c43381-42e2-4e01-9559-70c3c56070ea-kube-api-access-lnv2n\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:48 crc kubenswrapper[4745]: I0127 12:18:48.332635 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64c43381-42e2-4e01-9559-70c3c56070ea-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:48 crc kubenswrapper[4745]: I0127 12:18:48.425785 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-255wr" event={"ID":"64c43381-42e2-4e01-9559-70c3c56070ea","Type":"ContainerDied","Data":"26ebc03ce5679d50a88b967ef99aa04bdd01f6b47a3944ab1a4e3907e0fe03a7"} Jan 27 12:18:48 crc kubenswrapper[4745]: I0127 12:18:48.425862 4745 scope.go:117] "RemoveContainer" containerID="5001560d8ba03714a647addf077fe97b8b9d85a7595a322d09b822b0ae7693b0" Jan 27 12:18:48 crc kubenswrapper[4745]: I0127 12:18:48.425971 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-255wr" Jan 27 12:18:48 crc kubenswrapper[4745]: I0127 12:18:48.434177 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhnbq" event={"ID":"d2b41701-5113-4970-8d93-157bf16b3c06","Type":"ContainerDied","Data":"5bc90ef99a65d963df83582620079e3ec0a3650947f6c5ddf77d593a830d8946"} Jan 27 12:18:48 crc kubenswrapper[4745]: I0127 12:18:48.434305 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zhnbq" Jan 27 12:18:48 crc kubenswrapper[4745]: I0127 12:18:48.456313 4745 scope.go:117] "RemoveContainer" containerID="4a2a69222a5f2ea8d3286144487e6f060f76c37177144f4d7b065471e07ec3ae" Jan 27 12:18:48 crc kubenswrapper[4745]: I0127 12:18:48.469075 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zhnbq"] Jan 27 12:18:48 crc kubenswrapper[4745]: I0127 12:18:48.481627 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zhnbq"] Jan 27 12:18:48 crc kubenswrapper[4745]: I0127 12:18:48.487621 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-255wr"] Jan 27 12:18:48 crc kubenswrapper[4745]: I0127 12:18:48.492268 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-255wr"] Jan 27 12:18:48 crc kubenswrapper[4745]: I0127 12:18:48.497212 4745 scope.go:117] "RemoveContainer" containerID="af12074e02d034bfa4b98440c52d9a163f7c2a0063dfbc4bedb772a114b592f0" Jan 27 12:18:48 crc kubenswrapper[4745]: I0127 12:18:48.523522 4745 scope.go:117] "RemoveContainer" containerID="cd15586e1d10c05eef5d0049e00af975204f6a2fa477e171c9dd2d6a8af3e157" Jan 27 12:18:48 crc kubenswrapper[4745]: I0127 12:18:48.556691 4745 scope.go:117] "RemoveContainer" containerID="c9845d62431f87b97a415efa7aa9aefa2b825cd0cc760c633bd9da5bfe028a63" Jan 27 12:18:48 crc kubenswrapper[4745]: I0127 12:18:48.573198 4745 scope.go:117] "RemoveContainer" containerID="fcbe507db2aa8230c620c38f4555203eb8e317d1f966e60a3c440bc9bb509a4a" Jan 27 12:18:48 crc kubenswrapper[4745]: I0127 12:18:48.919779 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fzw5q"] Jan 27 12:18:48 crc kubenswrapper[4745]: I0127 12:18:48.920287 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fzw5q" podUID="36154dea-ca68-4ca6-8e2f-83a669152ca7" containerName="registry-server" containerID="cri-o://779d76e62c771a713509ec5c5a3052c6b04e91e24d02a597d523df0699e690a0" gracePeriod=2 Jan 27 12:18:49 crc kubenswrapper[4745]: I0127 12:18:49.112299 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2c9wm"] Jan 27 12:18:49 crc kubenswrapper[4745]: I0127 12:18:49.112502 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2c9wm" podUID="3fcec544-9ef8-406d-9f01-b3ceabf2b033" containerName="registry-server" containerID="cri-o://4d7d03cea1849e923194b74bf1a85ac962a1c60eafcee366a949ff7005ab9c8a" gracePeriod=2 Jan 27 12:18:49 crc kubenswrapper[4745]: I0127 12:18:49.441674 4745 generic.go:334] "Generic (PLEG): container finished" podID="36154dea-ca68-4ca6-8e2f-83a669152ca7" containerID="779d76e62c771a713509ec5c5a3052c6b04e91e24d02a597d523df0699e690a0" exitCode=0 Jan 27 12:18:49 crc kubenswrapper[4745]: I0127 12:18:49.441764 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzw5q" event={"ID":"36154dea-ca68-4ca6-8e2f-83a669152ca7","Type":"ContainerDied","Data":"779d76e62c771a713509ec5c5a3052c6b04e91e24d02a597d523df0699e690a0"} Jan 27 12:18:49 crc kubenswrapper[4745]: I0127 12:18:49.443940 4745 generic.go:334] "Generic (PLEG): container finished" podID="3fcec544-9ef8-406d-9f01-b3ceabf2b033" 
containerID="4d7d03cea1849e923194b74bf1a85ac962a1c60eafcee366a949ff7005ab9c8a" exitCode=0 Jan 27 12:18:49 crc kubenswrapper[4745]: I0127 12:18:49.444014 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c9wm" event={"ID":"3fcec544-9ef8-406d-9f01-b3ceabf2b033","Type":"ContainerDied","Data":"4d7d03cea1849e923194b74bf1a85ac962a1c60eafcee366a949ff7005ab9c8a"} Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.063916 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fzw5q" Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.089743 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64c43381-42e2-4e01-9559-70c3c56070ea" path="/var/lib/kubelet/pods/64c43381-42e2-4e01-9559-70c3c56070ea/volumes" Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.091029 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2b41701-5113-4970-8d93-157bf16b3c06" path="/var/lib/kubelet/pods/d2b41701-5113-4970-8d93-157bf16b3c06/volumes" Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.155099 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw9nh\" (UniqueName: \"kubernetes.io/projected/36154dea-ca68-4ca6-8e2f-83a669152ca7-kube-api-access-dw9nh\") pod \"36154dea-ca68-4ca6-8e2f-83a669152ca7\" (UID: \"36154dea-ca68-4ca6-8e2f-83a669152ca7\") " Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.155940 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36154dea-ca68-4ca6-8e2f-83a669152ca7-catalog-content\") pod \"36154dea-ca68-4ca6-8e2f-83a669152ca7\" (UID: \"36154dea-ca68-4ca6-8e2f-83a669152ca7\") " Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.155986 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36154dea-ca68-4ca6-8e2f-83a669152ca7-utilities\") pod \"36154dea-ca68-4ca6-8e2f-83a669152ca7\" (UID: \"36154dea-ca68-4ca6-8e2f-83a669152ca7\") " Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.156835 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36154dea-ca68-4ca6-8e2f-83a669152ca7-utilities" (OuterVolumeSpecName: "utilities") pod "36154dea-ca68-4ca6-8e2f-83a669152ca7" (UID: "36154dea-ca68-4ca6-8e2f-83a669152ca7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.176699 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36154dea-ca68-4ca6-8e2f-83a669152ca7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36154dea-ca68-4ca6-8e2f-83a669152ca7" (UID: "36154dea-ca68-4ca6-8e2f-83a669152ca7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.244687 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36154dea-ca68-4ca6-8e2f-83a669152ca7-kube-api-access-dw9nh" (OuterVolumeSpecName: "kube-api-access-dw9nh") pod "36154dea-ca68-4ca6-8e2f-83a669152ca7" (UID: "36154dea-ca68-4ca6-8e2f-83a669152ca7"). InnerVolumeSpecName "kube-api-access-dw9nh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.258299 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw9nh\" (UniqueName: \"kubernetes.io/projected/36154dea-ca68-4ca6-8e2f-83a669152ca7-kube-api-access-dw9nh\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.258332 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36154dea-ca68-4ca6-8e2f-83a669152ca7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.258341 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36154dea-ca68-4ca6-8e2f-83a669152ca7-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.290357 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2c9wm" Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.359753 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vkw5\" (UniqueName: \"kubernetes.io/projected/3fcec544-9ef8-406d-9f01-b3ceabf2b033-kube-api-access-5vkw5\") pod \"3fcec544-9ef8-406d-9f01-b3ceabf2b033\" (UID: \"3fcec544-9ef8-406d-9f01-b3ceabf2b033\") " Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.359852 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fcec544-9ef8-406d-9f01-b3ceabf2b033-utilities\") pod \"3fcec544-9ef8-406d-9f01-b3ceabf2b033\" (UID: \"3fcec544-9ef8-406d-9f01-b3ceabf2b033\") " Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.359975 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fcec544-9ef8-406d-9f01-b3ceabf2b033-catalog-content\") pod \"3fcec544-9ef8-406d-9f01-b3ceabf2b033\" (UID: \"3fcec544-9ef8-406d-9f01-b3ceabf2b033\") " Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.360693 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fcec544-9ef8-406d-9f01-b3ceabf2b033-utilities" (OuterVolumeSpecName: "utilities") pod "3fcec544-9ef8-406d-9f01-b3ceabf2b033" (UID: "3fcec544-9ef8-406d-9f01-b3ceabf2b033"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.363082 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fcec544-9ef8-406d-9f01-b3ceabf2b033-kube-api-access-5vkw5" (OuterVolumeSpecName: "kube-api-access-5vkw5") pod "3fcec544-9ef8-406d-9f01-b3ceabf2b033" (UID: "3fcec544-9ef8-406d-9f01-b3ceabf2b033"). InnerVolumeSpecName "kube-api-access-5vkw5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.369473 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vkw5\" (UniqueName: \"kubernetes.io/projected/3fcec544-9ef8-406d-9f01-b3ceabf2b033-kube-api-access-5vkw5\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.369510 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fcec544-9ef8-406d-9f01-b3ceabf2b033-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.453105 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzw5q" event={"ID":"36154dea-ca68-4ca6-8e2f-83a669152ca7","Type":"ContainerDied","Data":"1c75497f7121f7674d8cec25a04a66b97409f90bb48a3fe99ff45e1cab5cc649"} Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.453131 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fzw5q" Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.453165 4745 scope.go:117] "RemoveContainer" containerID="779d76e62c771a713509ec5c5a3052c6b04e91e24d02a597d523df0699e690a0" Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.457211 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c9wm" event={"ID":"3fcec544-9ef8-406d-9f01-b3ceabf2b033","Type":"ContainerDied","Data":"d7817b625e2c1db57be561c5ebd912730f6f4a2035bb8ebfb6bbc059c7d90a83"} Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.457302 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2c9wm" Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.467399 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fcec544-9ef8-406d-9f01-b3ceabf2b033-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3fcec544-9ef8-406d-9f01-b3ceabf2b033" (UID: "3fcec544-9ef8-406d-9f01-b3ceabf2b033"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.470776 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fcec544-9ef8-406d-9f01-b3ceabf2b033-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.470826 4745 scope.go:117] "RemoveContainer" containerID="1c663205a9fc19b47eac1baa7e513608ebbc5725da351e4f4bdcef26baa21223" Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.482731 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fzw5q"] Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.486286 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fzw5q"] Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.493943 4745 scope.go:117] "RemoveContainer" containerID="700ca73d62008821a7de19a80dd7da7992a9973cda13b3baa939ec0253ac71ac" Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.522019 4745 scope.go:117] "RemoveContainer" containerID="4d7d03cea1849e923194b74bf1a85ac962a1c60eafcee366a949ff7005ab9c8a" Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.536052 4745 scope.go:117] "RemoveContainer" containerID="63a54136d8fcb6b44e19c902c273c70c867dd941be0ee34492d3074359537ab4" Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.551294 4745 scope.go:117] "RemoveContainer" containerID="cffd45185cc0d7c532f64293bda76d21e9e6dbf69c51d75a9634c56ed140a06e" Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.795272 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2c9wm"] Jan 27 12:18:50 crc kubenswrapper[4745]: I0127 12:18:50.799856 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2c9wm"] Jan 27 12:18:52 crc kubenswrapper[4745]: I0127 12:18:52.084119 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36154dea-ca68-4ca6-8e2f-83a669152ca7" path="/var/lib/kubelet/pods/36154dea-ca68-4ca6-8e2f-83a669152ca7/volumes" Jan 27 12:18:52 crc kubenswrapper[4745]: I0127 12:18:52.085514 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fcec544-9ef8-406d-9f01-b3ceabf2b033" path="/var/lib/kubelet/pods/3fcec544-9ef8-406d-9f01-b3ceabf2b033/volumes" Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.218858 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-764dd48c6c-4rmcv"] Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.219786 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-764dd48c6c-4rmcv" podUID="9d575213-22d9-4fbc-ac57-72a22b46c5fe" containerName="controller-manager" containerID="cri-o://7b398e59d5630be0b955242b5d7d5821bb42cb2894dc0dd19769b567ea3e0684" gracePeriod=30 Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.232389 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr"] Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.232623 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr" podUID="ef501b00-038c-4e04-a308-ca40d5a5effd" containerName="route-controller-manager" 
containerID="cri-o://46a3367bccc82d27412ecbe32a257c46dd11185c37978f0c544e344a96b28872" gracePeriod=30 Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.493196 4745 generic.go:334] "Generic (PLEG): container finished" podID="9d575213-22d9-4fbc-ac57-72a22b46c5fe" containerID="7b398e59d5630be0b955242b5d7d5821bb42cb2894dc0dd19769b567ea3e0684" exitCode=0 Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.493420 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-764dd48c6c-4rmcv" event={"ID":"9d575213-22d9-4fbc-ac57-72a22b46c5fe","Type":"ContainerDied","Data":"7b398e59d5630be0b955242b5d7d5821bb42cb2894dc0dd19769b567ea3e0684"} Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.496259 4745 generic.go:334] "Generic (PLEG): container finished" podID="ef501b00-038c-4e04-a308-ca40d5a5effd" containerID="46a3367bccc82d27412ecbe32a257c46dd11185c37978f0c544e344a96b28872" exitCode=0 Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.496299 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr" event={"ID":"ef501b00-038c-4e04-a308-ca40d5a5effd","Type":"ContainerDied","Data":"46a3367bccc82d27412ecbe32a257c46dd11185c37978f0c544e344a96b28872"} Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.796484 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr" Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.851845 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-764dd48c6c-4rmcv" Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.953998 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d575213-22d9-4fbc-ac57-72a22b46c5fe-client-ca\") pod \"9d575213-22d9-4fbc-ac57-72a22b46c5fe\" (UID: \"9d575213-22d9-4fbc-ac57-72a22b46c5fe\") " Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.954070 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef501b00-038c-4e04-a308-ca40d5a5effd-config\") pod \"ef501b00-038c-4e04-a308-ca40d5a5effd\" (UID: \"ef501b00-038c-4e04-a308-ca40d5a5effd\") " Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.954115 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d575213-22d9-4fbc-ac57-72a22b46c5fe-config\") pod \"9d575213-22d9-4fbc-ac57-72a22b46c5fe\" (UID: \"9d575213-22d9-4fbc-ac57-72a22b46c5fe\") " Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.954163 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2ftc\" (UniqueName: \"kubernetes.io/projected/ef501b00-038c-4e04-a308-ca40d5a5effd-kube-api-access-p2ftc\") pod \"ef501b00-038c-4e04-a308-ca40d5a5effd\" (UID: \"ef501b00-038c-4e04-a308-ca40d5a5effd\") " Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.954233 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxczq\" (UniqueName: \"kubernetes.io/projected/9d575213-22d9-4fbc-ac57-72a22b46c5fe-kube-api-access-qxczq\") pod \"9d575213-22d9-4fbc-ac57-72a22b46c5fe\" (UID: \"9d575213-22d9-4fbc-ac57-72a22b46c5fe\") " Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 
12:18:56.954259 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef501b00-038c-4e04-a308-ca40d5a5effd-client-ca\") pod \"ef501b00-038c-4e04-a308-ca40d5a5effd\" (UID: \"ef501b00-038c-4e04-a308-ca40d5a5effd\") " Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.954356 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef501b00-038c-4e04-a308-ca40d5a5effd-serving-cert\") pod \"ef501b00-038c-4e04-a308-ca40d5a5effd\" (UID: \"ef501b00-038c-4e04-a308-ca40d5a5effd\") " Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.955205 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef501b00-038c-4e04-a308-ca40d5a5effd-config" (OuterVolumeSpecName: "config") pod "ef501b00-038c-4e04-a308-ca40d5a5effd" (UID: "ef501b00-038c-4e04-a308-ca40d5a5effd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.955251 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d575213-22d9-4fbc-ac57-72a22b46c5fe-config" (OuterVolumeSpecName: "config") pod "9d575213-22d9-4fbc-ac57-72a22b46c5fe" (UID: "9d575213-22d9-4fbc-ac57-72a22b46c5fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.955264 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef501b00-038c-4e04-a308-ca40d5a5effd-client-ca" (OuterVolumeSpecName: "client-ca") pod "ef501b00-038c-4e04-a308-ca40d5a5effd" (UID: "ef501b00-038c-4e04-a308-ca40d5a5effd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.955361 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9d575213-22d9-4fbc-ac57-72a22b46c5fe-proxy-ca-bundles\") pod \"9d575213-22d9-4fbc-ac57-72a22b46c5fe\" (UID: \"9d575213-22d9-4fbc-ac57-72a22b46c5fe\") " Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.955391 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d575213-22d9-4fbc-ac57-72a22b46c5fe-serving-cert\") pod \"9d575213-22d9-4fbc-ac57-72a22b46c5fe\" (UID: \"9d575213-22d9-4fbc-ac57-72a22b46c5fe\") " Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.955379 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d575213-22d9-4fbc-ac57-72a22b46c5fe-client-ca" (OuterVolumeSpecName: "client-ca") pod "9d575213-22d9-4fbc-ac57-72a22b46c5fe" (UID: "9d575213-22d9-4fbc-ac57-72a22b46c5fe"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.955893 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d575213-22d9-4fbc-ac57-72a22b46c5fe-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9d575213-22d9-4fbc-ac57-72a22b46c5fe" (UID: "9d575213-22d9-4fbc-ac57-72a22b46c5fe"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.956228 4745 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9d575213-22d9-4fbc-ac57-72a22b46c5fe-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.956248 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d575213-22d9-4fbc-ac57-72a22b46c5fe-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.956260 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef501b00-038c-4e04-a308-ca40d5a5effd-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.956272 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d575213-22d9-4fbc-ac57-72a22b46c5fe-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.956283 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef501b00-038c-4e04-a308-ca40d5a5effd-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.961121 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef501b00-038c-4e04-a308-ca40d5a5effd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ef501b00-038c-4e04-a308-ca40d5a5effd" (UID: "ef501b00-038c-4e04-a308-ca40d5a5effd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.961259 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef501b00-038c-4e04-a308-ca40d5a5effd-kube-api-access-p2ftc" (OuterVolumeSpecName: "kube-api-access-p2ftc") pod "ef501b00-038c-4e04-a308-ca40d5a5effd" (UID: "ef501b00-038c-4e04-a308-ca40d5a5effd"). InnerVolumeSpecName "kube-api-access-p2ftc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.962033 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d575213-22d9-4fbc-ac57-72a22b46c5fe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d575213-22d9-4fbc-ac57-72a22b46c5fe" (UID: "9d575213-22d9-4fbc-ac57-72a22b46c5fe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:18:56 crc kubenswrapper[4745]: I0127 12:18:56.962866 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d575213-22d9-4fbc-ac57-72a22b46c5fe-kube-api-access-qxczq" (OuterVolumeSpecName: "kube-api-access-qxczq") pod "9d575213-22d9-4fbc-ac57-72a22b46c5fe" (UID: "9d575213-22d9-4fbc-ac57-72a22b46c5fe"). InnerVolumeSpecName "kube-api-access-qxczq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.057669 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2ftc\" (UniqueName: \"kubernetes.io/projected/ef501b00-038c-4e04-a308-ca40d5a5effd-kube-api-access-p2ftc\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.057735 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxczq\" (UniqueName: \"kubernetes.io/projected/9d575213-22d9-4fbc-ac57-72a22b46c5fe-kube-api-access-qxczq\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.057750 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef501b00-038c-4e04-a308-ca40d5a5effd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.057763 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d575213-22d9-4fbc-ac57-72a22b46c5fe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.502308 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr" event={"ID":"ef501b00-038c-4e04-a308-ca40d5a5effd","Type":"ContainerDied","Data":"d1d17d53c7deb97f8b105b6faef534ff5c9e7b7381ac427fa13ec3c0d045d7e0"} Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.502341 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr" Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.502382 4745 scope.go:117] "RemoveContainer" containerID="46a3367bccc82d27412ecbe32a257c46dd11185c37978f0c544e344a96b28872" Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.505337 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-764dd48c6c-4rmcv" event={"ID":"9d575213-22d9-4fbc-ac57-72a22b46c5fe","Type":"ContainerDied","Data":"9c8f23925437ef6acccc6ab56fdd50958926f4a913145fab2a7889f8ca01477c"} Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.505380 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-764dd48c6c-4rmcv" Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.525933 4745 scope.go:117] "RemoveContainer" containerID="7b398e59d5630be0b955242b5d7d5821bb42cb2894dc0dd19769b567ea3e0684" Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.531038 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr"] Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.535321 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55f4f89974-kmcjr"] Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.543158 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-764dd48c6c-4rmcv"] Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.546300 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-764dd48c6c-4rmcv"] Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.872597 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl"] Jan 27 12:18:57 crc kubenswrapper[4745]: E0127 12:18:57.873917 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36154dea-ca68-4ca6-8e2f-83a669152ca7" containerName="registry-server" Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.874250 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="36154dea-ca68-4ca6-8e2f-83a669152ca7" containerName="registry-server" Jan 27 12:18:57 crc kubenswrapper[4745]: E0127 12:18:57.874383 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef501b00-038c-4e04-a308-ca40d5a5effd" containerName="route-controller-manager" Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.874520 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef501b00-038c-4e04-a308-ca40d5a5effd" containerName="route-controller-manager" Jan 27 12:18:57 crc kubenswrapper[4745]: E0127 12:18:57.875508 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fcec544-9ef8-406d-9f01-b3ceabf2b033" containerName="extract-utilities" Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.875605 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fcec544-9ef8-406d-9f01-b3ceabf2b033" containerName="extract-utilities" Jan 27 12:18:57 crc kubenswrapper[4745]: E0127 12:18:57.875682 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64c43381-42e2-4e01-9559-70c3c56070ea" containerName="extract-content" Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.875770 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="64c43381-42e2-4e01-9559-70c3c56070ea" containerName="extract-content" Jan 27 12:18:57 crc kubenswrapper[4745]: E0127 12:18:57.875904 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2b41701-5113-4970-8d93-157bf16b3c06" containerName="extract-content" Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.876037 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2b41701-5113-4970-8d93-157bf16b3c06" containerName="extract-content" Jan 27 12:18:57 crc kubenswrapper[4745]: E0127 12:18:57.876139 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36154dea-ca68-4ca6-8e2f-83a669152ca7" containerName="extract-utilities" Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.876259 4745 
state_mem.go:107] "Deleted CPUSet assignment" podUID="36154dea-ca68-4ca6-8e2f-83a669152ca7" containerName="extract-utilities" Jan 27 12:18:57 crc kubenswrapper[4745]: E0127 12:18:57.876369 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2b41701-5113-4970-8d93-157bf16b3c06" containerName="registry-server" Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.876452 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2b41701-5113-4970-8d93-157bf16b3c06" containerName="registry-server" Jan 27 12:18:57 crc kubenswrapper[4745]: E0127 12:18:57.876556 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fcec544-9ef8-406d-9f01-b3ceabf2b033" containerName="registry-server" Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.876637 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fcec544-9ef8-406d-9f01-b3ceabf2b033" containerName="registry-server" Jan 27 12:18:57 crc kubenswrapper[4745]: E0127 12:18:57.876724 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fcec544-9ef8-406d-9f01-b3ceabf2b033" containerName="extract-content" Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.876827 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fcec544-9ef8-406d-9f01-b3ceabf2b033" containerName="extract-content" Jan 27 12:18:57 crc kubenswrapper[4745]: E0127 12:18:57.876910 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64c43381-42e2-4e01-9559-70c3c56070ea" containerName="extract-utilities" Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.877003 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="64c43381-42e2-4e01-9559-70c3c56070ea" containerName="extract-utilities" Jan 27 12:18:57 crc kubenswrapper[4745]: E0127 12:18:57.877092 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2b41701-5113-4970-8d93-157bf16b3c06" containerName="extract-utilities" Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.877173 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2b41701-5113-4970-8d93-157bf16b3c06" containerName="extract-utilities" Jan 27 12:18:57 crc kubenswrapper[4745]: E0127 12:18:57.877261 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d575213-22d9-4fbc-ac57-72a22b46c5fe" containerName="controller-manager" Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.877353 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d575213-22d9-4fbc-ac57-72a22b46c5fe" containerName="controller-manager" Jan 27 12:18:57 crc kubenswrapper[4745]: E0127 12:18:57.877443 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64c43381-42e2-4e01-9559-70c3c56070ea" containerName="registry-server" Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.877522 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="64c43381-42e2-4e01-9559-70c3c56070ea" containerName="registry-server" Jan 27 12:18:57 crc kubenswrapper[4745]: E0127 12:18:57.877600 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36154dea-ca68-4ca6-8e2f-83a669152ca7" containerName="extract-content" Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.877678 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="36154dea-ca68-4ca6-8e2f-83a669152ca7" containerName="extract-content" Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.877960 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="64c43381-42e2-4e01-9559-70c3c56070ea" containerName="registry-server" Jan 27 12:18:57 crc 
Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.878045 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2b41701-5113-4970-8d93-157bf16b3c06" containerName="registry-server"
Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.878133 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="36154dea-ca68-4ca6-8e2f-83a669152ca7" containerName="registry-server"
Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.878214 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fcec544-9ef8-406d-9f01-b3ceabf2b033" containerName="registry-server"
Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.878287 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d575213-22d9-4fbc-ac57-72a22b46c5fe" containerName="controller-manager"
Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.878369 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef501b00-038c-4e04-a308-ca40d5a5effd" containerName="route-controller-manager"
Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.878863 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl"
Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.883862 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-55dbc78746-n86tp"]
Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.884434 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55dbc78746-n86tp"
Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.884477 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.884523 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.884812 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.885035 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.885100 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.885193 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.888268 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.888450 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.888728 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.889186 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 27 12:18:57 crc
kubenswrapper[4745]: I0127 12:18:57.889279 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl"]
Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.889593 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.889805 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.894632 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55dbc78746-n86tp"]
Jan 27 12:18:57 crc kubenswrapper[4745]: I0127 12:18:57.896342 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.067702 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d3783af-3e8b-42d2-a19b-ea987c06f517-client-ca\") pod \"controller-manager-55dbc78746-n86tp\" (UID: \"4d3783af-3e8b-42d2-a19b-ea987c06f517\") " pod="openshift-controller-manager/controller-manager-55dbc78746-n86tp"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.067765 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d3783af-3e8b-42d2-a19b-ea987c06f517-serving-cert\") pod \"controller-manager-55dbc78746-n86tp\" (UID: \"4d3783af-3e8b-42d2-a19b-ea987c06f517\") " pod="openshift-controller-manager/controller-manager-55dbc78746-n86tp"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.067799 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/665d3da4-2833-4579-b9dc-d810b284f325-client-ca\") pod \"route-controller-manager-65b84b8496-rtrfl\" (UID: \"665d3da4-2833-4579-b9dc-d810b284f325\") " pod="openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.067851 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/665d3da4-2833-4579-b9dc-d810b284f325-config\") pod \"route-controller-manager-65b84b8496-rtrfl\" (UID: \"665d3da4-2833-4579-b9dc-d810b284f325\") " pod="openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.067915 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcrs9\" (UniqueName: \"kubernetes.io/projected/665d3da4-2833-4579-b9dc-d810b284f325-kube-api-access-lcrs9\") pod \"route-controller-manager-65b84b8496-rtrfl\" (UID: \"665d3da4-2833-4579-b9dc-d810b284f325\") " pod="openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.067946 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d3783af-3e8b-42d2-a19b-ea987c06f517-proxy-ca-bundles\") pod \"controller-manager-55dbc78746-n86tp\" (UID: \"4d3783af-3e8b-42d2-a19b-ea987c06f517\") " pod="openshift-controller-manager/controller-manager-55dbc78746-n86tp"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.067974 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d3783af-3e8b-42d2-a19b-ea987c06f517-config\") pod \"controller-manager-55dbc78746-n86tp\" (UID: \"4d3783af-3e8b-42d2-a19b-ea987c06f517\") " pod="openshift-controller-manager/controller-manager-55dbc78746-n86tp"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.067996 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/665d3da4-2833-4579-b9dc-d810b284f325-serving-cert\") pod \"route-controller-manager-65b84b8496-rtrfl\" (UID: \"665d3da4-2833-4579-b9dc-d810b284f325\") " pod="openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.068013 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls76w\" (UniqueName: \"kubernetes.io/projected/4d3783af-3e8b-42d2-a19b-ea987c06f517-kube-api-access-ls76w\") pod \"controller-manager-55dbc78746-n86tp\" (UID: \"4d3783af-3e8b-42d2-a19b-ea987c06f517\") " pod="openshift-controller-manager/controller-manager-55dbc78746-n86tp"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.078934 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d575213-22d9-4fbc-ac57-72a22b46c5fe" path="/var/lib/kubelet/pods/9d575213-22d9-4fbc-ac57-72a22b46c5fe/volumes"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.079432 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef501b00-038c-4e04-a308-ca40d5a5effd" path="/var/lib/kubelet/pods/ef501b00-038c-4e04-a308-ca40d5a5effd/volumes"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.169534 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d3783af-3e8b-42d2-a19b-ea987c06f517-serving-cert\") pod \"controller-manager-55dbc78746-n86tp\" (UID: \"4d3783af-3e8b-42d2-a19b-ea987c06f517\") " pod="openshift-controller-manager/controller-manager-55dbc78746-n86tp"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.169601 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/665d3da4-2833-4579-b9dc-d810b284f325-client-ca\") pod \"route-controller-manager-65b84b8496-rtrfl\" (UID: \"665d3da4-2833-4579-b9dc-d810b284f325\") " pod="openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.169641 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/665d3da4-2833-4579-b9dc-d810b284f325-config\") pod \"route-controller-manager-65b84b8496-rtrfl\" (UID: \"665d3da4-2833-4579-b9dc-d810b284f325\") " pod="openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.169717 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcrs9\" (UniqueName: \"kubernetes.io/projected/665d3da4-2833-4579-b9dc-d810b284f325-kube-api-access-lcrs9\") pod \"route-controller-manager-65b84b8496-rtrfl\" (UID: \"665d3da4-2833-4579-b9dc-d810b284f325\") " pod="openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.169744 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d3783af-3e8b-42d2-a19b-ea987c06f517-proxy-ca-bundles\") pod \"controller-manager-55dbc78746-n86tp\" (UID: \"4d3783af-3e8b-42d2-a19b-ea987c06f517\") " pod="openshift-controller-manager/controller-manager-55dbc78746-n86tp"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.169803 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d3783af-3e8b-42d2-a19b-ea987c06f517-config\") pod \"controller-manager-55dbc78746-n86tp\" (UID: \"4d3783af-3e8b-42d2-a19b-ea987c06f517\") " pod="openshift-controller-manager/controller-manager-55dbc78746-n86tp"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.169858 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/665d3da4-2833-4579-b9dc-d810b284f325-serving-cert\") pod \"route-controller-manager-65b84b8496-rtrfl\" (UID: \"665d3da4-2833-4579-b9dc-d810b284f325\") " pod="openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.169886 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls76w\" (UniqueName: \"kubernetes.io/projected/4d3783af-3e8b-42d2-a19b-ea987c06f517-kube-api-access-ls76w\") pod \"controller-manager-55dbc78746-n86tp\" (UID: \"4d3783af-3e8b-42d2-a19b-ea987c06f517\") " pod="openshift-controller-manager/controller-manager-55dbc78746-n86tp"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.169933 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d3783af-3e8b-42d2-a19b-ea987c06f517-client-ca\") pod \"controller-manager-55dbc78746-n86tp\" (UID: \"4d3783af-3e8b-42d2-a19b-ea987c06f517\") " pod="openshift-controller-manager/controller-manager-55dbc78746-n86tp"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.171840 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d3783af-3e8b-42d2-a19b-ea987c06f517-config\") pod \"controller-manager-55dbc78746-n86tp\" (UID: \"4d3783af-3e8b-42d2-a19b-ea987c06f517\") " pod="openshift-controller-manager/controller-manager-55dbc78746-n86tp"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.172461 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/665d3da4-2833-4579-b9dc-d810b284f325-client-ca\") pod \"route-controller-manager-65b84b8496-rtrfl\" (UID: \"665d3da4-2833-4579-b9dc-d810b284f325\") " pod="openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.172785 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d3783af-3e8b-42d2-a19b-ea987c06f517-client-ca\") pod \"controller-manager-55dbc78746-n86tp\" (UID: \"4d3783af-3e8b-42d2-a19b-ea987c06f517\") " pod="openshift-controller-manager/controller-manager-55dbc78746-n86tp"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.174972 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/665d3da4-2833-4579-b9dc-d810b284f325-config\") pod \"route-controller-manager-65b84b8496-rtrfl\" (UID: \"665d3da4-2833-4579-b9dc-d810b284f325\") " pod="openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.175872 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d3783af-3e8b-42d2-a19b-ea987c06f517-proxy-ca-bundles\") pod \"controller-manager-55dbc78746-n86tp\" (UID: \"4d3783af-3e8b-42d2-a19b-ea987c06f517\") " pod="openshift-controller-manager/controller-manager-55dbc78746-n86tp"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.179330 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d3783af-3e8b-42d2-a19b-ea987c06f517-serving-cert\") pod \"controller-manager-55dbc78746-n86tp\" (UID: \"4d3783af-3e8b-42d2-a19b-ea987c06f517\") " pod="openshift-controller-manager/controller-manager-55dbc78746-n86tp"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.183574 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/665d3da4-2833-4579-b9dc-d810b284f325-serving-cert\") pod \"route-controller-manager-65b84b8496-rtrfl\" (UID: \"665d3da4-2833-4579-b9dc-d810b284f325\") " pod="openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.195360 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcrs9\" (UniqueName: \"kubernetes.io/projected/665d3da4-2833-4579-b9dc-d810b284f325-kube-api-access-lcrs9\") pod \"route-controller-manager-65b84b8496-rtrfl\" (UID: \"665d3da4-2833-4579-b9dc-d810b284f325\") " pod="openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.196946 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls76w\" (UniqueName: \"kubernetes.io/projected/4d3783af-3e8b-42d2-a19b-ea987c06f517-kube-api-access-ls76w\") pod \"controller-manager-55dbc78746-n86tp\" (UID: \"4d3783af-3e8b-42d2-a19b-ea987c06f517\") " pod="openshift-controller-manager/controller-manager-55dbc78746-n86tp"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.211458 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.218999 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55dbc78746-n86tp"
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.618575 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl"]
Jan 27 12:18:58 crc kubenswrapper[4745]: I0127 12:18:58.661653 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55dbc78746-n86tp"]
Jan 27 12:18:58 crc kubenswrapper[4745]: W0127 12:18:58.665669 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d3783af_3e8b_42d2_a19b_ea987c06f517.slice/crio-e087b5c778d67a2a66ae60a0e305513f14116ad8d77701f68d8ea35c6e1944f8 WatchSource:0}: Error finding container e087b5c778d67a2a66ae60a0e305513f14116ad8d77701f68d8ea35c6e1944f8: Status 404 returned error can't find the container with id e087b5c778d67a2a66ae60a0e305513f14116ad8d77701f68d8ea35c6e1944f8
Jan 27 12:18:59 crc kubenswrapper[4745]: I0127 12:18:59.518580 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55dbc78746-n86tp" event={"ID":"4d3783af-3e8b-42d2-a19b-ea987c06f517","Type":"ContainerStarted","Data":"761727a834184a709274b1f2154e8039c70f6bbecfe74829139acd7e567c54e1"}
Jan 27 12:18:59 crc kubenswrapper[4745]: I0127 12:18:59.518619 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55dbc78746-n86tp" event={"ID":"4d3783af-3e8b-42d2-a19b-ea987c06f517","Type":"ContainerStarted","Data":"e087b5c778d67a2a66ae60a0e305513f14116ad8d77701f68d8ea35c6e1944f8"}
Jan 27 12:18:59 crc kubenswrapper[4745]: I0127 12:18:59.518793 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-55dbc78746-n86tp"
Jan 27 12:18:59 crc kubenswrapper[4745]: I0127 12:18:59.520536 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl" event={"ID":"665d3da4-2833-4579-b9dc-d810b284f325","Type":"ContainerStarted","Data":"164fdbbb60ce2c58843b3a7e8bf1e2ceff091af100f137a727d23f4ffd815740"}
Jan 27 12:18:59 crc kubenswrapper[4745]: I0127 12:18:59.520563 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl" event={"ID":"665d3da4-2833-4579-b9dc-d810b284f325","Type":"ContainerStarted","Data":"09aa71427773a227cdffa1ad80ddf9d1962ecb26e742b8021309f277f551c527"}
Jan 27 12:18:59 crc kubenswrapper[4745]: I0127 12:18:59.520777 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl"
Jan 27 12:18:59 crc kubenswrapper[4745]: I0127 12:18:59.522655 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-55dbc78746-n86tp"
Jan 27 12:18:59 crc kubenswrapper[4745]: I0127 12:18:59.537507 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-55dbc78746-n86tp" podStartSLOduration=3.537478735 podStartE2EDuration="3.537478735s" podCreationTimestamp="2026-01-27 12:18:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:18:59.536240709 +0000 UTC m=+432.341151407" watchObservedRunningTime="2026-01-27 12:18:59.537478735 +0000 UTC m=+432.342389423"
Jan 27 12:18:59 crc kubenswrapper[4745]: I0127 12:18:59.572383 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl" podStartSLOduration=3.572362942 podStartE2EDuration="3.572362942s" podCreationTimestamp="2026-01-27 12:18:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:18:59.569360275 +0000 UTC m=+432.374270963" watchObservedRunningTime="2026-01-27 12:18:59.572362942 +0000 UTC m=+432.377273630"
Jan 27 12:19:00 crc kubenswrapper[4745]: I0127 12:19:00.025104 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl"
Jan 27 12:19:05 crc kubenswrapper[4745]: I0127 12:19:05.967652 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 12:19:05 crc kubenswrapper[4745]: I0127 12:19:05.968122 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 12:19:16 crc kubenswrapper[4745]: I0127 12:19:16.211253 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-55dbc78746-n86tp"]
Jan 27 12:19:16 crc kubenswrapper[4745]: I0127 12:19:16.212028 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-55dbc78746-n86tp" podUID="4d3783af-3e8b-42d2-a19b-ea987c06f517" containerName="controller-manager" containerID="cri-o://761727a834184a709274b1f2154e8039c70f6bbecfe74829139acd7e567c54e1" gracePeriod=30
Jan 27 12:19:16 crc kubenswrapper[4745]: I0127 12:19:16.310800 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl"]
Jan 27 12:19:16 crc kubenswrapper[4745]: I0127 12:19:16.311510 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl" podUID="665d3da4-2833-4579-b9dc-d810b284f325" containerName="route-controller-manager" containerID="cri-o://164fdbbb60ce2c58843b3a7e8bf1e2ceff091af100f137a727d23f4ffd815740" gracePeriod=30
Jan 27 12:19:16 crc kubenswrapper[4745]: I0127 12:19:16.624335 4745 generic.go:334] "Generic (PLEG): container finished" podID="4d3783af-3e8b-42d2-a19b-ea987c06f517" containerID="761727a834184a709274b1f2154e8039c70f6bbecfe74829139acd7e567c54e1" exitCode=0
Jan 27 12:19:16 crc kubenswrapper[4745]: I0127 12:19:16.624412 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55dbc78746-n86tp" event={"ID":"4d3783af-3e8b-42d2-a19b-ea987c06f517","Type":"ContainerDied","Data":"761727a834184a709274b1f2154e8039c70f6bbecfe74829139acd7e567c54e1"}
Jan 27 12:19:16 crc kubenswrapper[4745]: I0127 12:19:16.626879 4745 generic.go:334] "Generic (PLEG): container finished" podID="665d3da4-2833-4579-b9dc-d810b284f325" containerID="164fdbbb60ce2c58843b3a7e8bf1e2ceff091af100f137a727d23f4ffd815740" exitCode=0
Jan 27 12:19:16 crc kubenswrapper[4745]: I0127 12:19:16.626913 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl" event={"ID":"665d3da4-2833-4579-b9dc-d810b284f325","Type":"ContainerDied","Data":"164fdbbb60ce2c58843b3a7e8bf1e2ceff091af100f137a727d23f4ffd815740"}
Jan 27 12:19:16 crc kubenswrapper[4745]: I0127 12:19:16.816515 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl"
Jan 27 12:19:16 crc kubenswrapper[4745]: I0127 12:19:16.917017 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/665d3da4-2833-4579-b9dc-d810b284f325-client-ca\") pod \"665d3da4-2833-4579-b9dc-d810b284f325\" (UID: \"665d3da4-2833-4579-b9dc-d810b284f325\") "
Jan 27 12:19:16 crc kubenswrapper[4745]: I0127 12:19:16.917074 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/665d3da4-2833-4579-b9dc-d810b284f325-config\") pod \"665d3da4-2833-4579-b9dc-d810b284f325\" (UID: \"665d3da4-2833-4579-b9dc-d810b284f325\") "
Jan 27 12:19:16 crc kubenswrapper[4745]: I0127 12:19:16.917290 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/665d3da4-2833-4579-b9dc-d810b284f325-serving-cert\") pod \"665d3da4-2833-4579-b9dc-d810b284f325\" (UID: \"665d3da4-2833-4579-b9dc-d810b284f325\") "
Jan 27 12:19:16 crc kubenswrapper[4745]: I0127 12:19:16.917344 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcrs9\" (UniqueName: \"kubernetes.io/projected/665d3da4-2833-4579-b9dc-d810b284f325-kube-api-access-lcrs9\") pod \"665d3da4-2833-4579-b9dc-d810b284f325\" (UID: \"665d3da4-2833-4579-b9dc-d810b284f325\") "
Jan 27 12:19:16 crc kubenswrapper[4745]: I0127 12:19:16.917952 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/665d3da4-2833-4579-b9dc-d810b284f325-client-ca" (OuterVolumeSpecName: "client-ca") pod "665d3da4-2833-4579-b9dc-d810b284f325" (UID: "665d3da4-2833-4579-b9dc-d810b284f325"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 12:19:16 crc kubenswrapper[4745]: I0127 12:19:16.917991 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/665d3da4-2833-4579-b9dc-d810b284f325-config" (OuterVolumeSpecName: "config") pod "665d3da4-2833-4579-b9dc-d810b284f325" (UID: "665d3da4-2833-4579-b9dc-d810b284f325"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 12:19:16 crc kubenswrapper[4745]: I0127 12:19:16.922522 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/665d3da4-2833-4579-b9dc-d810b284f325-kube-api-access-lcrs9" (OuterVolumeSpecName: "kube-api-access-lcrs9") pod "665d3da4-2833-4579-b9dc-d810b284f325" (UID: "665d3da4-2833-4579-b9dc-d810b284f325"). InnerVolumeSpecName "kube-api-access-lcrs9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 12:19:16 crc kubenswrapper[4745]: I0127 12:19:16.927125 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/665d3da4-2833-4579-b9dc-d810b284f325-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "665d3da4-2833-4579-b9dc-d810b284f325" (UID: "665d3da4-2833-4579-b9dc-d810b284f325"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.036048 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/665d3da4-2833-4579-b9dc-d810b284f325-client-ca\") on node \"crc\" DevicePath \"\""
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.036095 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/665d3da4-2833-4579-b9dc-d810b284f325-config\") on node \"crc\" DevicePath \"\""
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.036104 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/665d3da4-2833-4579-b9dc-d810b284f325-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.036113 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcrs9\" (UniqueName: \"kubernetes.io/projected/665d3da4-2833-4579-b9dc-d810b284f325-kube-api-access-lcrs9\") on node \"crc\" DevicePath \"\""
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.438474 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55dbc78746-n86tp"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.543094 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d3783af-3e8b-42d2-a19b-ea987c06f517-config\") pod \"4d3783af-3e8b-42d2-a19b-ea987c06f517\" (UID: \"4d3783af-3e8b-42d2-a19b-ea987c06f517\") "
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.543166 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d3783af-3e8b-42d2-a19b-ea987c06f517-serving-cert\") pod \"4d3783af-3e8b-42d2-a19b-ea987c06f517\" (UID: \"4d3783af-3e8b-42d2-a19b-ea987c06f517\") "
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.543231 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d3783af-3e8b-42d2-a19b-ea987c06f517-client-ca\") pod \"4d3783af-3e8b-42d2-a19b-ea987c06f517\" (UID: \"4d3783af-3e8b-42d2-a19b-ea987c06f517\") "
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.543257 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls76w\" (UniqueName: \"kubernetes.io/projected/4d3783af-3e8b-42d2-a19b-ea987c06f517-kube-api-access-ls76w\") pod \"4d3783af-3e8b-42d2-a19b-ea987c06f517\" (UID: \"4d3783af-3e8b-42d2-a19b-ea987c06f517\") "
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.543285 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d3783af-3e8b-42d2-a19b-ea987c06f517-proxy-ca-bundles\") pod \"4d3783af-3e8b-42d2-a19b-ea987c06f517\" (UID: \"4d3783af-3e8b-42d2-a19b-ea987c06f517\") "
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.543946 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d3783af-3e8b-42d2-a19b-ea987c06f517-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4d3783af-3e8b-42d2-a19b-ea987c06f517" (UID: "4d3783af-3e8b-42d2-a19b-ea987c06f517"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.543992 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d3783af-3e8b-42d2-a19b-ea987c06f517-config" (OuterVolumeSpecName: "config") pod "4d3783af-3e8b-42d2-a19b-ea987c06f517" (UID: "4d3783af-3e8b-42d2-a19b-ea987c06f517"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.544297 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d3783af-3e8b-42d2-a19b-ea987c06f517-client-ca" (OuterVolumeSpecName: "client-ca") pod "4d3783af-3e8b-42d2-a19b-ea987c06f517" (UID: "4d3783af-3e8b-42d2-a19b-ea987c06f517"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.546782 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d3783af-3e8b-42d2-a19b-ea987c06f517-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4d3783af-3e8b-42d2-a19b-ea987c06f517" (UID: "4d3783af-3e8b-42d2-a19b-ea987c06f517"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.547520 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d3783af-3e8b-42d2-a19b-ea987c06f517-kube-api-access-ls76w" (OuterVolumeSpecName: "kube-api-access-ls76w") pod "4d3783af-3e8b-42d2-a19b-ea987c06f517" (UID: "4d3783af-3e8b-42d2-a19b-ea987c06f517"). InnerVolumeSpecName "kube-api-access-ls76w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.637731 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55dbc78746-n86tp" event={"ID":"4d3783af-3e8b-42d2-a19b-ea987c06f517","Type":"ContainerDied","Data":"e087b5c778d67a2a66ae60a0e305513f14116ad8d77701f68d8ea35c6e1944f8"}
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.637764 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55dbc78746-n86tp"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.637787 4745 scope.go:117] "RemoveContainer" containerID="761727a834184a709274b1f2154e8039c70f6bbecfe74829139acd7e567c54e1"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.639343 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl" event={"ID":"665d3da4-2833-4579-b9dc-d810b284f325","Type":"ContainerDied","Data":"09aa71427773a227cdffa1ad80ddf9d1962ecb26e742b8021309f277f551c527"}
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.639420 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.645082 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d3783af-3e8b-42d2-a19b-ea987c06f517-client-ca\") on node \"crc\" DevicePath \"\""
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.645124 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ls76w\" (UniqueName: \"kubernetes.io/projected/4d3783af-3e8b-42d2-a19b-ea987c06f517-kube-api-access-ls76w\") on node \"crc\" DevicePath \"\""
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.645140 4745 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d3783af-3e8b-42d2-a19b-ea987c06f517-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.645154 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d3783af-3e8b-42d2-a19b-ea987c06f517-config\") on node \"crc\" DevicePath \"\""
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.645166 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d3783af-3e8b-42d2-a19b-ea987c06f517-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.657462 4745 scope.go:117] "RemoveContainer" containerID="164fdbbb60ce2c58843b3a7e8bf1e2ceff091af100f137a727d23f4ffd815740"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.677968 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-55dbc78746-n86tp"]
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.681502 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-55dbc78746-n86tp"]
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.691990 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl"]
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.694977 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65b84b8496-rtrfl"]
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.890907 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7d89fb9756-tv87j"]
Jan 27 12:19:17 crc kubenswrapper[4745]: E0127 12:19:17.891205 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d3783af-3e8b-42d2-a19b-ea987c06f517" containerName="controller-manager"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.891219 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d3783af-3e8b-42d2-a19b-ea987c06f517" containerName="controller-manager"
Jan 27 12:19:17 crc kubenswrapper[4745]: E0127 12:19:17.891236 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="665d3da4-2833-4579-b9dc-d810b284f325" containerName="route-controller-manager"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.891244 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="665d3da4-2833-4579-b9dc-d810b284f325" containerName="route-controller-manager"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.891358 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d3783af-3e8b-42d2-a19b-ea987c06f517" containerName="controller-manager"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.891375 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="665d3da4-2833-4579-b9dc-d810b284f325" containerName="route-controller-manager"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.891883 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d89fb9756-tv87j"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.894714 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.896327 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw"]
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.897265 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.898965 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw"]
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.900491 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.901008 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.901077 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.901611 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.902109 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.903187 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.903565 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d89fb9756-tv87j"]
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.905976 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.906370 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.906460 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.906994 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.907332 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.910601 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.949440 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee5aa348-6635-441e-99aa-0c4776381dd1-serving-cert\") pod \"route-controller-manager-75b76566b9-kv5qw\" (UID: \"ee5aa348-6635-441e-99aa-0c4776381dd1\") " pod="openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.949499 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-client-ca\") pod \"controller-manager-7d89fb9756-tv87j\" (UID: \"710e65f7-7d7e-4c16-bd4a-f4f6fa02b154\") " pod="openshift-controller-manager/controller-manager-7d89fb9756-tv87j"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.949533 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee5aa348-6635-441e-99aa-0c4776381dd1-config\") pod \"route-controller-manager-75b76566b9-kv5qw\" (UID: \"ee5aa348-6635-441e-99aa-0c4776381dd1\") " pod="openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.949561 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-config\") pod \"controller-manager-7d89fb9756-tv87j\" (UID: \"710e65f7-7d7e-4c16-bd4a-f4f6fa02b154\") " pod="openshift-controller-manager/controller-manager-7d89fb9756-tv87j"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.949694 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-proxy-ca-bundles\") pod \"controller-manager-7d89fb9756-tv87j\" (UID: \"710e65f7-7d7e-4c16-bd4a-f4f6fa02b154\") " pod="openshift-controller-manager/controller-manager-7d89fb9756-tv87j"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.949886 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whffb\" (UniqueName: \"kubernetes.io/projected/ee5aa348-6635-441e-99aa-0c4776381dd1-kube-api-access-whffb\") pod \"route-controller-manager-75b76566b9-kv5qw\" (UID: \"ee5aa348-6635-441e-99aa-0c4776381dd1\") " pod="openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.949989 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8txr4\" (UniqueName: \"kubernetes.io/projected/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-kube-api-access-8txr4\") pod \"controller-manager-7d89fb9756-tv87j\" (UID: \"710e65f7-7d7e-4c16-bd4a-f4f6fa02b154\") " pod="openshift-controller-manager/controller-manager-7d89fb9756-tv87j"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.950114 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee5aa348-6635-441e-99aa-0c4776381dd1-client-ca\") pod \"route-controller-manager-75b76566b9-kv5qw\" (UID: \"ee5aa348-6635-441e-99aa-0c4776381dd1\") " pod="openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw"
Jan 27 12:19:17 crc kubenswrapper[4745]: I0127 12:19:17.950149 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-serving-cert\") pod \"controller-manager-7d89fb9756-tv87j\" (UID: \"710e65f7-7d7e-4c16-bd4a-f4f6fa02b154\") " pod="openshift-controller-manager/controller-manager-7d89fb9756-tv87j"
Jan 27 12:19:18 crc kubenswrapper[4745]: I0127 12:19:18.053696 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whffb\" (UniqueName: \"kubernetes.io/projected/ee5aa348-6635-441e-99aa-0c4776381dd1-kube-api-access-whffb\") pod \"route-controller-manager-75b76566b9-kv5qw\" (UID: \"ee5aa348-6635-441e-99aa-0c4776381dd1\") " pod="openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw"
Jan 27 12:19:18 crc kubenswrapper[4745]: I0127 12:19:18.053837 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8txr4\" (UniqueName: \"kubernetes.io/projected/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-kube-api-access-8txr4\") pod \"controller-manager-7d89fb9756-tv87j\" (UID: \"710e65f7-7d7e-4c16-bd4a-f4f6fa02b154\") " pod="openshift-controller-manager/controller-manager-7d89fb9756-tv87j"
Jan 27 12:19:18 crc kubenswrapper[4745]: I0127 12:19:18.053897 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee5aa348-6635-441e-99aa-0c4776381dd1-client-ca\") pod \"route-controller-manager-75b76566b9-kv5qw\" (UID: \"ee5aa348-6635-441e-99aa-0c4776381dd1\") " pod="openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw"
Jan 27 12:19:18 crc kubenswrapper[4745]: I0127 12:19:18.053945 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-serving-cert\") pod \"controller-manager-7d89fb9756-tv87j\" (UID: \"710e65f7-7d7e-4c16-bd4a-f4f6fa02b154\") " pod="openshift-controller-manager/controller-manager-7d89fb9756-tv87j"
Jan 27 12:19:18 crc kubenswrapper[4745]: I0127 12:19:18.053974 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee5aa348-6635-441e-99aa-0c4776381dd1-serving-cert\") pod \"route-controller-manager-75b76566b9-kv5qw\" (UID: \"ee5aa348-6635-441e-99aa-0c4776381dd1\") " pod="openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw"
Jan 27 12:19:18 crc kubenswrapper[4745]: I0127 12:19:18.054007 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-client-ca\") pod \"controller-manager-7d89fb9756-tv87j\" (UID: \"710e65f7-7d7e-4c16-bd4a-f4f6fa02b154\") " pod="openshift-controller-manager/controller-manager-7d89fb9756-tv87j"
Jan 27 12:19:18 crc kubenswrapper[4745]: I0127 12:19:18.054043 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee5aa348-6635-441e-99aa-0c4776381dd1-config\") pod \"route-controller-manager-75b76566b9-kv5qw\" (UID: \"ee5aa348-6635-441e-99aa-0c4776381dd1\") " pod="openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw"
Jan 27 12:19:18 crc kubenswrapper[4745]: I0127 12:19:18.054076 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-config\") pod \"controller-manager-7d89fb9756-tv87j\" (UID: \"710e65f7-7d7e-4c16-bd4a-f4f6fa02b154\") " pod="openshift-controller-manager/controller-manager-7d89fb9756-tv87j"
Jan 27 12:19:18 crc kubenswrapper[4745]: I0127 12:19:18.054109 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-proxy-ca-bundles\") pod \"controller-manager-7d89fb9756-tv87j\" (UID: \"710e65f7-7d7e-4c16-bd4a-f4f6fa02b154\") " pod="openshift-controller-manager/controller-manager-7d89fb9756-tv87j"
Jan 27 12:19:18 crc kubenswrapper[4745]: I0127 12:19:18.056014 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee5aa348-6635-441e-99aa-0c4776381dd1-client-ca\") pod \"route-controller-manager-75b76566b9-kv5qw\" (UID: \"ee5aa348-6635-441e-99aa-0c4776381dd1\") " pod="openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw"
Jan 27 12:19:18 crc kubenswrapper[4745]: I0127 12:19:18.056066 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee5aa348-6635-441e-99aa-0c4776381dd1-config\") pod \"route-controller-manager-75b76566b9-kv5qw\" (UID: \"ee5aa348-6635-441e-99aa-0c4776381dd1\") " pod="openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw"
Jan 27 12:19:18 crc kubenswrapper[4745]: I0127 12:19:18.056021 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-client-ca\") pod \"controller-manager-7d89fb9756-tv87j\" (UID: \"710e65f7-7d7e-4c16-bd4a-f4f6fa02b154\") " pod="openshift-controller-manager/controller-manager-7d89fb9756-tv87j"
Jan 27 12:19:18 crc kubenswrapper[4745]: I0127 12:19:18.057419 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-proxy-ca-bundles\") pod \"controller-manager-7d89fb9756-tv87j\" (UID: \"710e65f7-7d7e-4c16-bd4a-f4f6fa02b154\") " pod="openshift-controller-manager/controller-manager-7d89fb9756-tv87j"
Jan 27 12:19:18 crc kubenswrapper[4745]: I0127 12:19:18.058322 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-config\") pod \"controller-manager-7d89fb9756-tv87j\" (UID: \"710e65f7-7d7e-4c16-bd4a-f4f6fa02b154\") " pod="openshift-controller-manager/controller-manager-7d89fb9756-tv87j"
Jan 27 12:19:18 crc kubenswrapper[4745]: I0127 12:19:18.059550 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee5aa348-6635-441e-99aa-0c4776381dd1-serving-cert\") pod \"route-controller-manager-75b76566b9-kv5qw\" (UID: \"ee5aa348-6635-441e-99aa-0c4776381dd1\") " pod="openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw"
Jan 27 12:19:18 crc kubenswrapper[4745]: I0127 12:19:18.059596 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-serving-cert\") pod \"controller-manager-7d89fb9756-tv87j\" (UID: \"710e65f7-7d7e-4c16-bd4a-f4f6fa02b154\") " pod="openshift-controller-manager/controller-manager-7d89fb9756-tv87j"
Jan 27 12:19:18 crc kubenswrapper[4745]: I0127 12:19:18.077440 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whffb\" (UniqueName: \"kubernetes.io/projected/ee5aa348-6635-441e-99aa-0c4776381dd1-kube-api-access-whffb\") pod \"route-controller-manager-75b76566b9-kv5qw\" (UID: \"ee5aa348-6635-441e-99aa-0c4776381dd1\") " pod="openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw"
Jan 27 12:19:18 crc kubenswrapper[4745]: I0127 12:19:18.079389 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8txr4\" (UniqueName: \"kubernetes.io/projected/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-kube-api-access-8txr4\") pod \"controller-manager-7d89fb9756-tv87j\" (UID: \"710e65f7-7d7e-4c16-bd4a-f4f6fa02b154\") " pod="openshift-controller-manager/controller-manager-7d89fb9756-tv87j"
Jan 27 12:19:18 crc kubenswrapper[4745]: I0127 12:19:18.084090 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d3783af-3e8b-42d2-a19b-ea987c06f517" path="/var/lib/kubelet/pods/4d3783af-3e8b-42d2-a19b-ea987c06f517/volumes"
Jan 27 12:19:18 crc kubenswrapper[4745]: I0127 12:19:18.084640 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="665d3da4-2833-4579-b9dc-d810b284f325" path="/var/lib/kubelet/pods/665d3da4-2833-4579-b9dc-d810b284f325/volumes"
Jan 27 12:19:18 crc kubenswrapper[4745]: I0127 12:19:18.210948 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d89fb9756-tv87j"
Jan 27 12:19:18 crc kubenswrapper[4745]: I0127 12:19:18.228059 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw"
Jan 27 12:19:18 crc kubenswrapper[4745]: I0127 12:19:18.620277 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d89fb9756-tv87j"]
Jan 27 12:19:18 crc kubenswrapper[4745]: W0127 12:19:18.641184 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod710e65f7_7d7e_4c16_bd4a_f4f6fa02b154.slice/crio-8f4045a76e0da0007b79820a172c750e4ed0c5ac463008168ff440933fbad0a6 WatchSource:0}: Error finding container 8f4045a76e0da0007b79820a172c750e4ed0c5ac463008168ff440933fbad0a6: Status 404 returned error can't find the container with id 8f4045a76e0da0007b79820a172c750e4ed0c5ac463008168ff440933fbad0a6
Jan 27 12:19:18 crc kubenswrapper[4745]: I0127 12:19:18.681282 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw"]
Jan 27 12:19:18 crc kubenswrapper[4745]: W0127 12:19:18.685743 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee5aa348_6635_441e_99aa_0c4776381dd1.slice/crio-be261dd94dd9119f8a98585393da269b3b59cd42d85a2ce41902d3add44d5f93 WatchSource:0}: Error finding container be261dd94dd9119f8a98585393da269b3b59cd42d85a2ce41902d3add44d5f93: Status 404 returned error can't find the container with id be261dd94dd9119f8a98585393da269b3b59cd42d85a2ce41902d3add44d5f93
Jan 27 12:19:19 crc kubenswrapper[4745]: I0127 12:19:19.661562 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d89fb9756-tv87j" event={"ID":"710e65f7-7d7e-4c16-bd4a-f4f6fa02b154","Type":"ContainerStarted","Data":"f93e8df13a1cf48610ecea55df0c4a1a28d5b29fa726e08891ac7996637d989d"}
Jan 27 12:19:19 crc kubenswrapper[4745]: I0127 12:19:19.662185 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d89fb9756-tv87j" event={"ID":"710e65f7-7d7e-4c16-bd4a-f4f6fa02b154","Type":"ContainerStarted","Data":"8f4045a76e0da0007b79820a172c750e4ed0c5ac463008168ff440933fbad0a6"}
Jan 27 12:19:19 crc kubenswrapper[4745]: I0127 12:19:19.662237 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7d89fb9756-tv87j"
Jan 27 12:19:19 crc kubenswrapper[4745]: I0127 12:19:19.664880 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw" event={"ID":"ee5aa348-6635-441e-99aa-0c4776381dd1","Type":"ContainerStarted","Data":"631a85af1321da1940966aaebf694339643f7c6c0ff5d6459e0f3034aa3a0244"}
Jan 27 12:19:19 crc kubenswrapper[4745]: I0127 12:19:19.664943 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw" event={"ID":"ee5aa348-6635-441e-99aa-0c4776381dd1","Type":"ContainerStarted","Data":"be261dd94dd9119f8a98585393da269b3b59cd42d85a2ce41902d3add44d5f93"}
Jan 27 12:19:19 crc kubenswrapper[4745]: I0127 12:19:19.665192 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw"
Jan 27 12:19:19 crc kubenswrapper[4745]: I0127 12:19:19.667623 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7d89fb9756-tv87j"
Jan 27 12:19:19 crc kubenswrapper[4745]: I0127 12:19:19.692326 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw"
Jan 27 12:19:19 crc kubenswrapper[4745]: I0127 12:19:19.695644 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7d89fb9756-tv87j" podStartSLOduration=3.695602926 podStartE2EDuration="3.695602926s" podCreationTimestamp="2026-01-27 12:19:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:19:19.689927542 +0000 UTC m=+452.494838230" watchObservedRunningTime="2026-01-27 12:19:19.695602926 +0000 UTC m=+452.500513614"
Jan 27 12:19:19 crc kubenswrapper[4745]: I0127 12:19:19.721385 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw" podStartSLOduration=3.72136374 podStartE2EDuration="3.72136374s" podCreationTimestamp="2026-01-27 12:19:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:19:19.720170125 +0000 UTC m=+452.525080813" watchObservedRunningTime="2026-01-27 12:19:19.72136374 +0000 UTC m=+452.526274428"
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.514187 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bmx2n"]
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.515033 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bmx2n" podUID="7c6f4dda-1294-4903-a4c1-6685307c3b25" containerName="registry-server" containerID="cri-o://bd5491f7f0e459da8a3dbed2aac2a653309e10eea252f2f2ba2907a23f2c904e" gracePeriod=30
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.521318 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tzw6b"]
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.521860 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tzw6b" podUID="341a8942-834f-4f76-8269-7ecdecaaa1b0" containerName="registry-server" containerID="cri-o://b0916971a50047bc3ecf82a5e73970103b735a529c0ef23324cbb90cbed42099" gracePeriod=30
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.528076 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jfk2x"]
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.528297 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-jfk2x" podUID="5cd78bf5-69f3-4074-9dea-c7a459de6d4d" containerName="marketplace-operator" containerID="cri-o://a887876be0a0983d29839bc5e0ebb9444857efcbdcc08aa3c88f84695524ef4d" gracePeriod=30
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.534425 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hw272"]
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.534718 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hw272" podUID="6d114857-b077-4798-b578-b9a15645d31f" containerName="registry-server" containerID="cri-o://68db3ea37b482395445dd0ac418e32ce5fddec7cf59150d5fa43cfaa1ce4b73b" gracePeriod=30
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.539470 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fdwrb"]
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.540235 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fdwrb"
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.546395 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9tkgm"]
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.546671 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9tkgm" podUID="7ff89667-3b76-4571-a07b-d43bce0a2e5b" containerName="registry-server" containerID="cri-o://d3a160686ccda655c81e74bd9d33a37463d91ee5e8ab70e9a12d11197101634e" gracePeriod=30
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.557468 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fdwrb"]
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.696166 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxrmm\" (UniqueName: \"kubernetes.io/projected/b077012b-6cdc-4a9a-85ec-4d9f0f59dce1-kube-api-access-lxrmm\") pod \"marketplace-operator-79b997595-fdwrb\" (UID: \"b077012b-6cdc-4a9a-85ec-4d9f0f59dce1\") " pod="openshift-marketplace/marketplace-operator-79b997595-fdwrb"
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.696227 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b077012b-6cdc-4a9a-85ec-4d9f0f59dce1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fdwrb\" (UID: \"b077012b-6cdc-4a9a-85ec-4d9f0f59dce1\") " pod="openshift-marketplace/marketplace-operator-79b997595-fdwrb"
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.696254 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b077012b-6cdc-4a9a-85ec-4d9f0f59dce1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fdwrb\" (UID: \"b077012b-6cdc-4a9a-85ec-4d9f0f59dce1\") " pod="openshift-marketplace/marketplace-operator-79b997595-fdwrb"
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.773024 4745 generic.go:334] "Generic (PLEG): container finished" podID="7ff89667-3b76-4571-a07b-d43bce0a2e5b" containerID="d3a160686ccda655c81e74bd9d33a37463d91ee5e8ab70e9a12d11197101634e" exitCode=0
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.773088 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9tkgm" event={"ID":"7ff89667-3b76-4571-a07b-d43bce0a2e5b","Type":"ContainerDied","Data":"d3a160686ccda655c81e74bd9d33a37463d91ee5e8ab70e9a12d11197101634e"}
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.776327 4745 generic.go:334] "Generic (PLEG): container finished" podID="5cd78bf5-69f3-4074-9dea-c7a459de6d4d" containerID="a887876be0a0983d29839bc5e0ebb9444857efcbdcc08aa3c88f84695524ef4d" exitCode=0
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.776376 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jfk2x" event={"ID":"5cd78bf5-69f3-4074-9dea-c7a459de6d4d","Type":"ContainerDied","Data":"a887876be0a0983d29839bc5e0ebb9444857efcbdcc08aa3c88f84695524ef4d"}
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.781795 4745 generic.go:334] "Generic (PLEG): container finished" podID="7c6f4dda-1294-4903-a4c1-6685307c3b25" containerID="bd5491f7f0e459da8a3dbed2aac2a653309e10eea252f2f2ba2907a23f2c904e" exitCode=0
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.781881 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmx2n" event={"ID":"7c6f4dda-1294-4903-a4c1-6685307c3b25","Type":"ContainerDied","Data":"bd5491f7f0e459da8a3dbed2aac2a653309e10eea252f2f2ba2907a23f2c904e"}
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.785007 4745 generic.go:334] "Generic (PLEG): container finished" podID="6d114857-b077-4798-b578-b9a15645d31f" containerID="68db3ea37b482395445dd0ac418e32ce5fddec7cf59150d5fa43cfaa1ce4b73b" exitCode=0
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.785099 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hw272" event={"ID":"6d114857-b077-4798-b578-b9a15645d31f","Type":"ContainerDied","Data":"68db3ea37b482395445dd0ac418e32ce5fddec7cf59150d5fa43cfaa1ce4b73b"}
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.789962 4745 generic.go:334] "Generic (PLEG): container finished" podID="341a8942-834f-4f76-8269-7ecdecaaa1b0" containerID="b0916971a50047bc3ecf82a5e73970103b735a529c0ef23324cbb90cbed42099" exitCode=0
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.790003 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzw6b" event={"ID":"341a8942-834f-4f76-8269-7ecdecaaa1b0","Type":"ContainerDied","Data":"b0916971a50047bc3ecf82a5e73970103b735a529c0ef23324cbb90cbed42099"}
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.797631 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxrmm\" (UniqueName: \"kubernetes.io/projected/b077012b-6cdc-4a9a-85ec-4d9f0f59dce1-kube-api-access-lxrmm\") pod \"marketplace-operator-79b997595-fdwrb\" (UID: \"b077012b-6cdc-4a9a-85ec-4d9f0f59dce1\") " pod="openshift-marketplace/marketplace-operator-79b997595-fdwrb"
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.797932 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b077012b-6cdc-4a9a-85ec-4d9f0f59dce1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fdwrb\" (UID: \"b077012b-6cdc-4a9a-85ec-4d9f0f59dce1\") " pod="openshift-marketplace/marketplace-operator-79b997595-fdwrb"
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.797994 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b077012b-6cdc-4a9a-85ec-4d9f0f59dce1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fdwrb\" (UID: \"b077012b-6cdc-4a9a-85ec-4d9f0f59dce1\") " pod="openshift-marketplace/marketplace-operator-79b997595-fdwrb"
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.800234 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b077012b-6cdc-4a9a-85ec-4d9f0f59dce1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fdwrb\" (UID: \"b077012b-6cdc-4a9a-85ec-4d9f0f59dce1\") " pod="openshift-marketplace/marketplace-operator-79b997595-fdwrb"
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.808788 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b077012b-6cdc-4a9a-85ec-4d9f0f59dce1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fdwrb\" (UID: \"b077012b-6cdc-4a9a-85ec-4d9f0f59dce1\") " pod="openshift-marketplace/marketplace-operator-79b997595-fdwrb"
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.815406 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxrmm\" (UniqueName: \"kubernetes.io/projected/b077012b-6cdc-4a9a-85ec-4d9f0f59dce1-kube-api-access-lxrmm\") pod \"marketplace-operator-79b997595-fdwrb\" (UID: \"b077012b-6cdc-4a9a-85ec-4d9f0f59dce1\") " pod="openshift-marketplace/marketplace-operator-79b997595-fdwrb"
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.867468 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fdwrb"
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.966984 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.967285 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.967332 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp"
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.967974 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"99d92afaff4d5da46033fc226ce0aba0f0ba990de6f690349b869b38b7d1aea9"} pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 12:19:35 crc kubenswrapper[4745]: I0127 12:19:35.968023 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" containerID="cri-o://99d92afaff4d5da46033fc226ce0aba0f0ba990de6f690349b869b38b7d1aea9" gracePeriod=600
Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.161121 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hw272"
Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.248777 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7d89fb9756-tv87j"]
Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.249745 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7d89fb9756-tv87j" podUID="710e65f7-7d7e-4c16-bd4a-f4f6fa02b154" containerName="controller-manager" containerID="cri-o://f93e8df13a1cf48610ecea55df0c4a1a28d5b29fa726e08891ac7996637d989d" gracePeriod=30
Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.281483 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jfk2x"
Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.283520 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw"]
Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.283732 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw" podUID="ee5aa348-6635-441e-99aa-0c4776381dd1" containerName="route-controller-manager" containerID="cri-o://631a85af1321da1940966aaebf694339643f7c6c0ff5d6459e0f3034aa3a0244" gracePeriod=30
Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.301045 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9tkgm"
Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.310469 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d114857-b077-4798-b578-b9a15645d31f-catalog-content\") pod \"6d114857-b077-4798-b578-b9a15645d31f\" (UID: \"6d114857-b077-4798-b578-b9a15645d31f\") "
Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.310540 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d114857-b077-4798-b578-b9a15645d31f-utilities\") pod \"6d114857-b077-4798-b578-b9a15645d31f\" (UID: \"6d114857-b077-4798-b578-b9a15645d31f\") "
Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.310655 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfrhw\" (UniqueName: \"kubernetes.io/projected/6d114857-b077-4798-b578-b9a15645d31f-kube-api-access-kfrhw\") pod \"6d114857-b077-4798-b578-b9a15645d31f\" (UID: \"6d114857-b077-4798-b578-b9a15645d31f\") "
Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.319778 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d114857-b077-4798-b578-b9a15645d31f-utilities" (OuterVolumeSpecName: "utilities") pod "6d114857-b077-4798-b578-b9a15645d31f" (UID: "6d114857-b077-4798-b578-b9a15645d31f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.342067 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d114857-b077-4798-b578-b9a15645d31f-kube-api-access-kfrhw" (OuterVolumeSpecName: "kube-api-access-kfrhw") pod "6d114857-b077-4798-b578-b9a15645d31f" (UID: "6d114857-b077-4798-b578-b9a15645d31f").
InnerVolumeSpecName "kube-api-access-kfrhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.361535 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d114857-b077-4798-b578-b9a15645d31f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d114857-b077-4798-b578-b9a15645d31f" (UID: "6d114857-b077-4798-b578-b9a15645d31f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.411969 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff89667-3b76-4571-a07b-d43bce0a2e5b-utilities\") pod \"7ff89667-3b76-4571-a07b-d43bce0a2e5b\" (UID: \"7ff89667-3b76-4571-a07b-d43bce0a2e5b\") " Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.412128 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5cd78bf5-69f3-4074-9dea-c7a459de6d4d-marketplace-trusted-ca\") pod \"5cd78bf5-69f3-4074-9dea-c7a459de6d4d\" (UID: \"5cd78bf5-69f3-4074-9dea-c7a459de6d4d\") " Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.412188 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff89667-3b76-4571-a07b-d43bce0a2e5b-catalog-content\") pod \"7ff89667-3b76-4571-a07b-d43bce0a2e5b\" (UID: \"7ff89667-3b76-4571-a07b-d43bce0a2e5b\") " Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.412218 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5cd78bf5-69f3-4074-9dea-c7a459de6d4d-marketplace-operator-metrics\") pod \"5cd78bf5-69f3-4074-9dea-c7a459de6d4d\" (UID: \"5cd78bf5-69f3-4074-9dea-c7a459de6d4d\") " Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.412314 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-224b6\" (UniqueName: \"kubernetes.io/projected/7ff89667-3b76-4571-a07b-d43bce0a2e5b-kube-api-access-224b6\") pod \"7ff89667-3b76-4571-a07b-d43bce0a2e5b\" (UID: \"7ff89667-3b76-4571-a07b-d43bce0a2e5b\") " Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.412345 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkwbv\" (UniqueName: \"kubernetes.io/projected/5cd78bf5-69f3-4074-9dea-c7a459de6d4d-kube-api-access-zkwbv\") pod \"5cd78bf5-69f3-4074-9dea-c7a459de6d4d\" (UID: \"5cd78bf5-69f3-4074-9dea-c7a459de6d4d\") " Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.412874 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfrhw\" (UniqueName: \"kubernetes.io/projected/6d114857-b077-4798-b578-b9a15645d31f-kube-api-access-kfrhw\") on node \"crc\" DevicePath \"\"" Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.412962 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d114857-b077-4798-b578-b9a15645d31f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.412998 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d114857-b077-4798-b578-b9a15645d31f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 
12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.414239 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ff89667-3b76-4571-a07b-d43bce0a2e5b-utilities" (OuterVolumeSpecName: "utilities") pod "7ff89667-3b76-4571-a07b-d43bce0a2e5b" (UID: "7ff89667-3b76-4571-a07b-d43bce0a2e5b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.414357 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cd78bf5-69f3-4074-9dea-c7a459de6d4d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "5cd78bf5-69f3-4074-9dea-c7a459de6d4d" (UID: "5cd78bf5-69f3-4074-9dea-c7a459de6d4d"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.416245 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fdwrb"] Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.417593 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ff89667-3b76-4571-a07b-d43bce0a2e5b-kube-api-access-224b6" (OuterVolumeSpecName: "kube-api-access-224b6") pod "7ff89667-3b76-4571-a07b-d43bce0a2e5b" (UID: "7ff89667-3b76-4571-a07b-d43bce0a2e5b"). InnerVolumeSpecName "kube-api-access-224b6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.417666 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cd78bf5-69f3-4074-9dea-c7a459de6d4d-kube-api-access-zkwbv" (OuterVolumeSpecName: "kube-api-access-zkwbv") pod "5cd78bf5-69f3-4074-9dea-c7a459de6d4d" (UID: "5cd78bf5-69f3-4074-9dea-c7a459de6d4d"). InnerVolumeSpecName "kube-api-access-zkwbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.418286 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cd78bf5-69f3-4074-9dea-c7a459de6d4d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "5cd78bf5-69f3-4074-9dea-c7a459de6d4d" (UID: "5cd78bf5-69f3-4074-9dea-c7a459de6d4d"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:19:36 crc kubenswrapper[4745]: W0127 12:19:36.469621 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb077012b_6cdc_4a9a_85ec_4d9f0f59dce1.slice/crio-7e4720fe92e18429be987ea6afec5eba6c8c16b6ea89478be7bd97d00b6384c4 WatchSource:0}: Error finding container 7e4720fe92e18429be987ea6afec5eba6c8c16b6ea89478be7bd97d00b6384c4: Status 404 returned error can't find the container with id 7e4720fe92e18429be987ea6afec5eba6c8c16b6ea89478be7bd97d00b6384c4 Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.514112 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-224b6\" (UniqueName: \"kubernetes.io/projected/7ff89667-3b76-4571-a07b-d43bce0a2e5b-kube-api-access-224b6\") on node \"crc\" DevicePath \"\"" Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.514143 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkwbv\" (UniqueName: \"kubernetes.io/projected/5cd78bf5-69f3-4074-9dea-c7a459de6d4d-kube-api-access-zkwbv\") on node \"crc\" DevicePath \"\"" Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.514158 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff89667-3b76-4571-a07b-d43bce0a2e5b-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.514172 4745 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5cd78bf5-69f3-4074-9dea-c7a459de6d4d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.514183 4745 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5cd78bf5-69f3-4074-9dea-c7a459de6d4d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.612409 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ff89667-3b76-4571-a07b-d43bce0a2e5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ff89667-3b76-4571-a07b-d43bce0a2e5b" (UID: "7ff89667-3b76-4571-a07b-d43bce0a2e5b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.616629 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff89667-3b76-4571-a07b-d43bce0a2e5b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.675521 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7rjtn"] Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.810228 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzw6b" event={"ID":"341a8942-834f-4f76-8269-7ecdecaaa1b0","Type":"ContainerDied","Data":"c24c2497e7279fcfb476fddd3237415918ef081866244f3cd8fa278ee8f60478"} Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.810296 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c24c2497e7279fcfb476fddd3237415918ef081866244f3cd8fa278ee8f60478" Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.812498 4745 generic.go:334] "Generic (PLEG): container finished" podID="ee5aa348-6635-441e-99aa-0c4776381dd1" containerID="631a85af1321da1940966aaebf694339643f7c6c0ff5d6459e0f3034aa3a0244" exitCode=0 Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.812575 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw" event={"ID":"ee5aa348-6635-441e-99aa-0c4776381dd1","Type":"ContainerDied","Data":"631a85af1321da1940966aaebf694339643f7c6c0ff5d6459e0f3034aa3a0244"} Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.814474 4745 generic.go:334] "Generic (PLEG): container finished" podID="710e65f7-7d7e-4c16-bd4a-f4f6fa02b154" containerID="f93e8df13a1cf48610ecea55df0c4a1a28d5b29fa726e08891ac7996637d989d" exitCode=0 Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.814508 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d89fb9756-tv87j" event={"ID":"710e65f7-7d7e-4c16-bd4a-f4f6fa02b154","Type":"ContainerDied","Data":"f93e8df13a1cf48610ecea55df0c4a1a28d5b29fa726e08891ac7996637d989d"} Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.815794 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fdwrb" event={"ID":"b077012b-6cdc-4a9a-85ec-4d9f0f59dce1","Type":"ContainerStarted","Data":"7e4720fe92e18429be987ea6afec5eba6c8c16b6ea89478be7bd97d00b6384c4"} Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.838201 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jfk2x" event={"ID":"5cd78bf5-69f3-4074-9dea-c7a459de6d4d","Type":"ContainerDied","Data":"220344e56572558fd8f8fb94fecabca5e92e4b873bb794a6db80ce7ea188431a"} Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.838251 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jfk2x" Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.838306 4745 scope.go:117] "RemoveContainer" containerID="a887876be0a0983d29839bc5e0ebb9444857efcbdcc08aa3c88f84695524ef4d" Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.862647 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9tkgm" event={"ID":"7ff89667-3b76-4571-a07b-d43bce0a2e5b","Type":"ContainerDied","Data":"f1fa1b7133677d5e9717cc2b4350e0c3c9e919f3dfd146e9bded94f918c61302"} Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.862792 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9tkgm" Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.867191 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tzw6b" Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.879595 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hw272" event={"ID":"6d114857-b077-4798-b578-b9a15645d31f","Type":"ContainerDied","Data":"ab5b4953a8f2277dad071de4e8ebf65992d6f11b90522eb0ef4d900374cf36da"} Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.879769 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hw272" Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.883689 4745 scope.go:117] "RemoveContainer" containerID="d3a160686ccda655c81e74bd9d33a37463d91ee5e8ab70e9a12d11197101634e" Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.897221 4745 generic.go:334] "Generic (PLEG): container finished" podID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerID="99d92afaff4d5da46033fc226ce0aba0f0ba990de6f690349b869b38b7d1aea9" exitCode=0 Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.897278 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerDied","Data":"99d92afaff4d5da46033fc226ce0aba0f0ba990de6f690349b869b38b7d1aea9"} Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.945128 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jfk2x"] Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.950300 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jfk2x"] Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.967215 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9tkgm"] Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.968167 4745 scope.go:117] "RemoveContainer" containerID="08efc721326ad7b1afe48e8eebdf9c75e8b377863f7f8e6ceb30bcd2332d42a9" Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.973795 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9tkgm"] Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.980348 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hw272"] Jan 27 12:19:36 crc kubenswrapper[4745]: I0127 12:19:36.984640 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hw272"] Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 
12:19:37.039984 4745 scope.go:117] "RemoveContainer" containerID="36deef2649ebaf45ebd297dfb2f2ec9a31c6fa227ac0d1a57bafa1292007315d" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.040757 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccqcz\" (UniqueName: \"kubernetes.io/projected/341a8942-834f-4f76-8269-7ecdecaaa1b0-kube-api-access-ccqcz\") pod \"341a8942-834f-4f76-8269-7ecdecaaa1b0\" (UID: \"341a8942-834f-4f76-8269-7ecdecaaa1b0\") " Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.040849 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/341a8942-834f-4f76-8269-7ecdecaaa1b0-catalog-content\") pod \"341a8942-834f-4f76-8269-7ecdecaaa1b0\" (UID: \"341a8942-834f-4f76-8269-7ecdecaaa1b0\") " Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.040869 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/341a8942-834f-4f76-8269-7ecdecaaa1b0-utilities\") pod \"341a8942-834f-4f76-8269-7ecdecaaa1b0\" (UID: \"341a8942-834f-4f76-8269-7ecdecaaa1b0\") " Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.041853 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/341a8942-834f-4f76-8269-7ecdecaaa1b0-utilities" (OuterVolumeSpecName: "utilities") pod "341a8942-834f-4f76-8269-7ecdecaaa1b0" (UID: "341a8942-834f-4f76-8269-7ecdecaaa1b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.046675 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/341a8942-834f-4f76-8269-7ecdecaaa1b0-kube-api-access-ccqcz" (OuterVolumeSpecName: "kube-api-access-ccqcz") pod "341a8942-834f-4f76-8269-7ecdecaaa1b0" (UID: "341a8942-834f-4f76-8269-7ecdecaaa1b0"). InnerVolumeSpecName "kube-api-access-ccqcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.059653 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bmx2n" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.079122 4745 scope.go:117] "RemoveContainer" containerID="68db3ea37b482395445dd0ac418e32ce5fddec7cf59150d5fa43cfaa1ce4b73b" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.128717 4745 scope.go:117] "RemoveContainer" containerID="0f5d8dc9b636a5e3f071e28dc44f9c33f273e76715d4cb5c5008f733fcf569ee" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.142089 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccqcz\" (UniqueName: \"kubernetes.io/projected/341a8942-834f-4f76-8269-7ecdecaaa1b0-kube-api-access-ccqcz\") on node \"crc\" DevicePath \"\"" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.142124 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/341a8942-834f-4f76-8269-7ecdecaaa1b0-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.142745 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/341a8942-834f-4f76-8269-7ecdecaaa1b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "341a8942-834f-4f76-8269-7ecdecaaa1b0" (UID: "341a8942-834f-4f76-8269-7ecdecaaa1b0"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.152846 4745 scope.go:117] "RemoveContainer" containerID="b0fba6408b0f57c8898eb7f4cac3b045069835e3b9de3b1e38d096a46bacd018" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.181618 4745 scope.go:117] "RemoveContainer" containerID="d3d3ab911a32e7166fa39d2c131b1289662b613ba6b37a9ff53ed747d8262865" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.243243 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxf9r\" (UniqueName: \"kubernetes.io/projected/7c6f4dda-1294-4903-a4c1-6685307c3b25-kube-api-access-pxf9r\") pod \"7c6f4dda-1294-4903-a4c1-6685307c3b25\" (UID: \"7c6f4dda-1294-4903-a4c1-6685307c3b25\") " Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.243314 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c6f4dda-1294-4903-a4c1-6685307c3b25-catalog-content\") pod \"7c6f4dda-1294-4903-a4c1-6685307c3b25\" (UID: \"7c6f4dda-1294-4903-a4c1-6685307c3b25\") " Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.243344 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c6f4dda-1294-4903-a4c1-6685307c3b25-utilities\") pod \"7c6f4dda-1294-4903-a4c1-6685307c3b25\" (UID: \"7c6f4dda-1294-4903-a4c1-6685307c3b25\") " Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.243630 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/341a8942-834f-4f76-8269-7ecdecaaa1b0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.245371 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c6f4dda-1294-4903-a4c1-6685307c3b25-utilities" (OuterVolumeSpecName: "utilities") pod "7c6f4dda-1294-4903-a4c1-6685307c3b25" (UID: "7c6f4dda-1294-4903-a4c1-6685307c3b25"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.246380 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c6f4dda-1294-4903-a4c1-6685307c3b25-kube-api-access-pxf9r" (OuterVolumeSpecName: "kube-api-access-pxf9r") pod "7c6f4dda-1294-4903-a4c1-6685307c3b25" (UID: "7c6f4dda-1294-4903-a4c1-6685307c3b25"). InnerVolumeSpecName "kube-api-access-pxf9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.287299 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c6f4dda-1294-4903-a4c1-6685307c3b25-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c6f4dda-1294-4903-a4c1-6685307c3b25" (UID: "7c6f4dda-1294-4903-a4c1-6685307c3b25"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.344604 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxf9r\" (UniqueName: \"kubernetes.io/projected/7c6f4dda-1294-4903-a4c1-6685307c3b25-kube-api-access-pxf9r\") on node \"crc\" DevicePath \"\"" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.344639 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c6f4dda-1294-4903-a4c1-6685307c3b25-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.344649 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c6f4dda-1294-4903-a4c1-6685307c3b25-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.441984 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d89fb9756-tv87j" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.445195 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-client-ca\") pod \"710e65f7-7d7e-4c16-bd4a-f4f6fa02b154\" (UID: \"710e65f7-7d7e-4c16-bd4a-f4f6fa02b154\") " Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.445272 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-proxy-ca-bundles\") pod \"710e65f7-7d7e-4c16-bd4a-f4f6fa02b154\" (UID: \"710e65f7-7d7e-4c16-bd4a-f4f6fa02b154\") " Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.445299 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-serving-cert\") pod \"710e65f7-7d7e-4c16-bd4a-f4f6fa02b154\" (UID: \"710e65f7-7d7e-4c16-bd4a-f4f6fa02b154\") " Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.445327 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8txr4\" (UniqueName: \"kubernetes.io/projected/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-kube-api-access-8txr4\") pod \"710e65f7-7d7e-4c16-bd4a-f4f6fa02b154\" (UID: \"710e65f7-7d7e-4c16-bd4a-f4f6fa02b154\") " Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.445350 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-config\") pod \"710e65f7-7d7e-4c16-bd4a-f4f6fa02b154\" (UID: \"710e65f7-7d7e-4c16-bd4a-f4f6fa02b154\") " Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.446029 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-client-ca" (OuterVolumeSpecName: "client-ca") pod "710e65f7-7d7e-4c16-bd4a-f4f6fa02b154" (UID: "710e65f7-7d7e-4c16-bd4a-f4f6fa02b154"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.446039 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "710e65f7-7d7e-4c16-bd4a-f4f6fa02b154" (UID: "710e65f7-7d7e-4c16-bd4a-f4f6fa02b154"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.446089 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-config" (OuterVolumeSpecName: "config") pod "710e65f7-7d7e-4c16-bd4a-f4f6fa02b154" (UID: "710e65f7-7d7e-4c16-bd4a-f4f6fa02b154"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.449623 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "710e65f7-7d7e-4c16-bd4a-f4f6fa02b154" (UID: "710e65f7-7d7e-4c16-bd4a-f4f6fa02b154"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.450026 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-kube-api-access-8txr4" (OuterVolumeSpecName: "kube-api-access-8txr4") pod "710e65f7-7d7e-4c16-bd4a-f4f6fa02b154" (UID: "710e65f7-7d7e-4c16-bd4a-f4f6fa02b154"). InnerVolumeSpecName "kube-api-access-8txr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.546277 4745 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.546570 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.546581 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8txr4\" (UniqueName: \"kubernetes.io/projected/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-kube-api-access-8txr4\") on node \"crc\" DevicePath \"\"" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.546591 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.546602 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.721378 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sp442"] Jan 27 12:19:37 crc kubenswrapper[4745]: E0127 12:19:37.721571 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d114857-b077-4798-b578-b9a15645d31f" containerName="registry-server" Jan 27 12:19:37 crc kubenswrapper[4745]: 
I0127 12:19:37.721581 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d114857-b077-4798-b578-b9a15645d31f" containerName="registry-server" Jan 27 12:19:37 crc kubenswrapper[4745]: E0127 12:19:37.721592 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="341a8942-834f-4f76-8269-7ecdecaaa1b0" containerName="extract-content" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.721598 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="341a8942-834f-4f76-8269-7ecdecaaa1b0" containerName="extract-content" Jan 27 12:19:37 crc kubenswrapper[4745]: E0127 12:19:37.721608 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff89667-3b76-4571-a07b-d43bce0a2e5b" containerName="registry-server" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.721613 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff89667-3b76-4571-a07b-d43bce0a2e5b" containerName="registry-server" Jan 27 12:19:37 crc kubenswrapper[4745]: E0127 12:19:37.721623 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d114857-b077-4798-b578-b9a15645d31f" containerName="extract-utilities" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.721629 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d114857-b077-4798-b578-b9a15645d31f" containerName="extract-utilities" Jan 27 12:19:37 crc kubenswrapper[4745]: E0127 12:19:37.721636 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff89667-3b76-4571-a07b-d43bce0a2e5b" containerName="extract-content" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.721641 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff89667-3b76-4571-a07b-d43bce0a2e5b" containerName="extract-content" Jan 27 12:19:37 crc kubenswrapper[4745]: E0127 12:19:37.721651 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="341a8942-834f-4f76-8269-7ecdecaaa1b0" containerName="registry-server" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.721657 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="341a8942-834f-4f76-8269-7ecdecaaa1b0" containerName="registry-server" Jan 27 12:19:37 crc kubenswrapper[4745]: E0127 12:19:37.721663 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d114857-b077-4798-b578-b9a15645d31f" containerName="extract-content" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.721668 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d114857-b077-4798-b578-b9a15645d31f" containerName="extract-content" Jan 27 12:19:37 crc kubenswrapper[4745]: E0127 12:19:37.721677 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="710e65f7-7d7e-4c16-bd4a-f4f6fa02b154" containerName="controller-manager" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.721683 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="710e65f7-7d7e-4c16-bd4a-f4f6fa02b154" containerName="controller-manager" Jan 27 12:19:37 crc kubenswrapper[4745]: E0127 12:19:37.721692 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cd78bf5-69f3-4074-9dea-c7a459de6d4d" containerName="marketplace-operator" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.721697 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd78bf5-69f3-4074-9dea-c7a459de6d4d" containerName="marketplace-operator" Jan 27 12:19:37 crc kubenswrapper[4745]: E0127 12:19:37.721703 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="341a8942-834f-4f76-8269-7ecdecaaa1b0" containerName="extract-utilities" 
Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.721709 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="341a8942-834f-4f76-8269-7ecdecaaa1b0" containerName="extract-utilities" Jan 27 12:19:37 crc kubenswrapper[4745]: E0127 12:19:37.721715 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c6f4dda-1294-4903-a4c1-6685307c3b25" containerName="extract-content" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.721720 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c6f4dda-1294-4903-a4c1-6685307c3b25" containerName="extract-content" Jan 27 12:19:37 crc kubenswrapper[4745]: E0127 12:19:37.721728 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c6f4dda-1294-4903-a4c1-6685307c3b25" containerName="extract-utilities" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.721735 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c6f4dda-1294-4903-a4c1-6685307c3b25" containerName="extract-utilities" Jan 27 12:19:37 crc kubenswrapper[4745]: E0127 12:19:37.721741 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff89667-3b76-4571-a07b-d43bce0a2e5b" containerName="extract-utilities" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.721746 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff89667-3b76-4571-a07b-d43bce0a2e5b" containerName="extract-utilities" Jan 27 12:19:37 crc kubenswrapper[4745]: E0127 12:19:37.721752 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c6f4dda-1294-4903-a4c1-6685307c3b25" containerName="registry-server" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.721757 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c6f4dda-1294-4903-a4c1-6685307c3b25" containerName="registry-server" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.721846 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d114857-b077-4798-b578-b9a15645d31f" containerName="registry-server" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.721858 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="710e65f7-7d7e-4c16-bd4a-f4f6fa02b154" containerName="controller-manager" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.721867 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cd78bf5-69f3-4074-9dea-c7a459de6d4d" containerName="marketplace-operator" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.721874 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="341a8942-834f-4f76-8269-7ecdecaaa1b0" containerName="registry-server" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.721884 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ff89667-3b76-4571-a07b-d43bce0a2e5b" containerName="registry-server" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.721895 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c6f4dda-1294-4903-a4c1-6685307c3b25" containerName="registry-server" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.722507 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sp442" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.724767 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.742131 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sp442"] Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.748076 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjxnh\" (UniqueName: \"kubernetes.io/projected/502a401d-0f57-4a44-a241-a6150f1e3c48-kube-api-access-rjxnh\") pod \"redhat-marketplace-sp442\" (UID: \"502a401d-0f57-4a44-a241-a6150f1e3c48\") " pod="openshift-marketplace/redhat-marketplace-sp442" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.748140 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/502a401d-0f57-4a44-a241-a6150f1e3c48-catalog-content\") pod \"redhat-marketplace-sp442\" (UID: \"502a401d-0f57-4a44-a241-a6150f1e3c48\") " pod="openshift-marketplace/redhat-marketplace-sp442" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.748224 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/502a401d-0f57-4a44-a241-a6150f1e3c48-utilities\") pod \"redhat-marketplace-sp442\" (UID: \"502a401d-0f57-4a44-a241-a6150f1e3c48\") " pod="openshift-marketplace/redhat-marketplace-sp442" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.849274 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/502a401d-0f57-4a44-a241-a6150f1e3c48-utilities\") pod \"redhat-marketplace-sp442\" (UID: \"502a401d-0f57-4a44-a241-a6150f1e3c48\") " pod="openshift-marketplace/redhat-marketplace-sp442" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.849642 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjxnh\" (UniqueName: \"kubernetes.io/projected/502a401d-0f57-4a44-a241-a6150f1e3c48-kube-api-access-rjxnh\") pod \"redhat-marketplace-sp442\" (UID: \"502a401d-0f57-4a44-a241-a6150f1e3c48\") " pod="openshift-marketplace/redhat-marketplace-sp442" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.849842 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/502a401d-0f57-4a44-a241-a6150f1e3c48-catalog-content\") pod \"redhat-marketplace-sp442\" (UID: \"502a401d-0f57-4a44-a241-a6150f1e3c48\") " pod="openshift-marketplace/redhat-marketplace-sp442" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.849908 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/502a401d-0f57-4a44-a241-a6150f1e3c48-utilities\") pod \"redhat-marketplace-sp442\" (UID: \"502a401d-0f57-4a44-a241-a6150f1e3c48\") " pod="openshift-marketplace/redhat-marketplace-sp442" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.850156 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/502a401d-0f57-4a44-a241-a6150f1e3c48-catalog-content\") pod \"redhat-marketplace-sp442\" (UID: 
\"502a401d-0f57-4a44-a241-a6150f1e3c48\") " pod="openshift-marketplace/redhat-marketplace-sp442" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.867478 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjxnh\" (UniqueName: \"kubernetes.io/projected/502a401d-0f57-4a44-a241-a6150f1e3c48-kube-api-access-rjxnh\") pod \"redhat-marketplace-sp442\" (UID: \"502a401d-0f57-4a44-a241-a6150f1e3c48\") " pod="openshift-marketplace/redhat-marketplace-sp442" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.906737 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-756c9cd684-rr9l6"] Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.908287 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-756c9cd684-rr9l6" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.912620 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-756c9cd684-rr9l6"] Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.922781 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d89fb9756-tv87j" event={"ID":"710e65f7-7d7e-4c16-bd4a-f4f6fa02b154","Type":"ContainerDied","Data":"8f4045a76e0da0007b79820a172c750e4ed0c5ac463008168ff440933fbad0a6"} Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.923053 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d89fb9756-tv87j" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.928733 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fdwrb" event={"ID":"b077012b-6cdc-4a9a-85ec-4d9f0f59dce1","Type":"ContainerStarted","Data":"ffdf447e8bc64f920bd168a97330c606863b94d0b44255884a7dc64454cf74a9"} Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.929029 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fdwrb" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.933342 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-fdwrb" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.950406 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bmx2n" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.950431 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmx2n" event={"ID":"7c6f4dda-1294-4903-a4c1-6685307c3b25","Type":"ContainerDied","Data":"1d4ef67a7790b75f979c0229c7f83f6f6bae2efd5df34c4912edc42e58db1048"} Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.951112 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-config\") pod \"controller-manager-756c9cd684-rr9l6\" (UID: \"b96cf09a-c9fc-4831-ac5f-6f41fc66b348\") " pod="openshift-controller-manager/controller-manager-756c9cd684-rr9l6" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.951146 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-client-ca\") pod \"controller-manager-756c9cd684-rr9l6\" (UID: \"b96cf09a-c9fc-4831-ac5f-6f41fc66b348\") " pod="openshift-controller-manager/controller-manager-756c9cd684-rr9l6" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.951752 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6lx9\" (UniqueName: \"kubernetes.io/projected/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-kube-api-access-d6lx9\") pod \"controller-manager-756c9cd684-rr9l6\" (UID: \"b96cf09a-c9fc-4831-ac5f-6f41fc66b348\") " pod="openshift-controller-manager/controller-manager-756c9cd684-rr9l6" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.951787 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-serving-cert\") pod \"controller-manager-756c9cd684-rr9l6\" (UID: \"b96cf09a-c9fc-4831-ac5f-6f41fc66b348\") " pod="openshift-controller-manager/controller-manager-756c9cd684-rr9l6" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.951852 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-proxy-ca-bundles\") pod \"controller-manager-756c9cd684-rr9l6\" (UID: \"b96cf09a-c9fc-4831-ac5f-6f41fc66b348\") " pod="openshift-controller-manager/controller-manager-756c9cd684-rr9l6" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.955206 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerStarted","Data":"1d165911e4b0e3c594550d27c4ba050dc9d70726bdb582f151b9ce410a17826f"} Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.958939 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tzw6b" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.962105 4745 scope.go:117] "RemoveContainer" containerID="f93e8df13a1cf48610ecea55df0c4a1a28d5b29fa726e08891ac7996637d989d" Jan 27 12:19:37 crc kubenswrapper[4745]: I0127 12:19:37.972979 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-fdwrb" podStartSLOduration=2.972957362 podStartE2EDuration="2.972957362s" podCreationTimestamp="2026-01-27 12:19:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:19:37.969920693 +0000 UTC m=+470.774831401" watchObservedRunningTime="2026-01-27 12:19:37.972957362 +0000 UTC m=+470.777868070" Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.014196 4745 scope.go:117] "RemoveContainer" containerID="bd5491f7f0e459da8a3dbed2aac2a653309e10eea252f2f2ba2907a23f2c904e" Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.036615 4745 scope.go:117] "RemoveContainer" containerID="3bf365a7e2e65357db9a2fc4d87e2a66330a34323434f16c0c43668f5caa3e08" Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.040912 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sp442" Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.052547 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-config\") pod \"controller-manager-756c9cd684-rr9l6\" (UID: \"b96cf09a-c9fc-4831-ac5f-6f41fc66b348\") " pod="openshift-controller-manager/controller-manager-756c9cd684-rr9l6" Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.052604 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-client-ca\") pod \"controller-manager-756c9cd684-rr9l6\" (UID: \"b96cf09a-c9fc-4831-ac5f-6f41fc66b348\") " pod="openshift-controller-manager/controller-manager-756c9cd684-rr9l6" Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.052628 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6lx9\" (UniqueName: \"kubernetes.io/projected/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-kube-api-access-d6lx9\") pod \"controller-manager-756c9cd684-rr9l6\" (UID: \"b96cf09a-c9fc-4831-ac5f-6f41fc66b348\") " pod="openshift-controller-manager/controller-manager-756c9cd684-rr9l6" Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.052648 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-serving-cert\") pod \"controller-manager-756c9cd684-rr9l6\" (UID: \"b96cf09a-c9fc-4831-ac5f-6f41fc66b348\") " pod="openshift-controller-manager/controller-manager-756c9cd684-rr9l6" Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.052687 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-proxy-ca-bundles\") pod \"controller-manager-756c9cd684-rr9l6\" (UID: \"b96cf09a-c9fc-4831-ac5f-6f41fc66b348\") " pod="openshift-controller-manager/controller-manager-756c9cd684-rr9l6" Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.054537 
4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-config\") pod \"controller-manager-756c9cd684-rr9l6\" (UID: \"b96cf09a-c9fc-4831-ac5f-6f41fc66b348\") " pod="openshift-controller-manager/controller-manager-756c9cd684-rr9l6" Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.054980 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-client-ca\") pod \"controller-manager-756c9cd684-rr9l6\" (UID: \"b96cf09a-c9fc-4831-ac5f-6f41fc66b348\") " pod="openshift-controller-manager/controller-manager-756c9cd684-rr9l6" Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.056605 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-proxy-ca-bundles\") pod \"controller-manager-756c9cd684-rr9l6\" (UID: \"b96cf09a-c9fc-4831-ac5f-6f41fc66b348\") " pod="openshift-controller-manager/controller-manager-756c9cd684-rr9l6" Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.066450 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-serving-cert\") pod \"controller-manager-756c9cd684-rr9l6\" (UID: \"b96cf09a-c9fc-4831-ac5f-6f41fc66b348\") " pod="openshift-controller-manager/controller-manager-756c9cd684-rr9l6" Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.070049 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7d89fb9756-tv87j"] Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.073324 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7d89fb9756-tv87j"] Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.075196 4745 scope.go:117] "RemoveContainer" containerID="4c36cc87e141e730d84c81478b435a07113a1804c5bf5778526ff0a68e8c51d7" Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.078323 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6lx9\" (UniqueName: \"kubernetes.io/projected/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-kube-api-access-d6lx9\") pod \"controller-manager-756c9cd684-rr9l6\" (UID: \"b96cf09a-c9fc-4831-ac5f-6f41fc66b348\") " pod="openshift-controller-manager/controller-manager-756c9cd684-rr9l6" Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.094803 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cd78bf5-69f3-4074-9dea-c7a459de6d4d" path="/var/lib/kubelet/pods/5cd78bf5-69f3-4074-9dea-c7a459de6d4d/volumes" Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.095368 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d114857-b077-4798-b578-b9a15645d31f" path="/var/lib/kubelet/pods/6d114857-b077-4798-b578-b9a15645d31f/volumes" Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.096119 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="710e65f7-7d7e-4c16-bd4a-f4f6fa02b154" path="/var/lib/kubelet/pods/710e65f7-7d7e-4c16-bd4a-f4f6fa02b154/volumes" Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.097060 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ff89667-3b76-4571-a07b-d43bce0a2e5b" path="/var/lib/kubelet/pods/7ff89667-3b76-4571-a07b-d43bce0a2e5b/volumes" Jan 
27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.097513 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tzw6b"] Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.097556 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tzw6b"] Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.097571 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bmx2n"] Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.098629 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bmx2n"] Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.131385 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw" Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.154221 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee5aa348-6635-441e-99aa-0c4776381dd1-client-ca\") pod \"ee5aa348-6635-441e-99aa-0c4776381dd1\" (UID: \"ee5aa348-6635-441e-99aa-0c4776381dd1\") " Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.154349 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee5aa348-6635-441e-99aa-0c4776381dd1-serving-cert\") pod \"ee5aa348-6635-441e-99aa-0c4776381dd1\" (UID: \"ee5aa348-6635-441e-99aa-0c4776381dd1\") " Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.154386 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee5aa348-6635-441e-99aa-0c4776381dd1-config\") pod \"ee5aa348-6635-441e-99aa-0c4776381dd1\" (UID: \"ee5aa348-6635-441e-99aa-0c4776381dd1\") " Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.154454 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whffb\" (UniqueName: \"kubernetes.io/projected/ee5aa348-6635-441e-99aa-0c4776381dd1-kube-api-access-whffb\") pod \"ee5aa348-6635-441e-99aa-0c4776381dd1\" (UID: \"ee5aa348-6635-441e-99aa-0c4776381dd1\") " Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.155201 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee5aa348-6635-441e-99aa-0c4776381dd1-client-ca" (OuterVolumeSpecName: "client-ca") pod "ee5aa348-6635-441e-99aa-0c4776381dd1" (UID: "ee5aa348-6635-441e-99aa-0c4776381dd1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.155275 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee5aa348-6635-441e-99aa-0c4776381dd1-config" (OuterVolumeSpecName: "config") pod "ee5aa348-6635-441e-99aa-0c4776381dd1" (UID: "ee5aa348-6635-441e-99aa-0c4776381dd1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.155585 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee5aa348-6635-441e-99aa-0c4776381dd1-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.155607 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee5aa348-6635-441e-99aa-0c4776381dd1-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.158776 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee5aa348-6635-441e-99aa-0c4776381dd1-kube-api-access-whffb" (OuterVolumeSpecName: "kube-api-access-whffb") pod "ee5aa348-6635-441e-99aa-0c4776381dd1" (UID: "ee5aa348-6635-441e-99aa-0c4776381dd1"). InnerVolumeSpecName "kube-api-access-whffb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.161012 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee5aa348-6635-441e-99aa-0c4776381dd1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ee5aa348-6635-441e-99aa-0c4776381dd1" (UID: "ee5aa348-6635-441e-99aa-0c4776381dd1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.254236 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-756c9cd684-rr9l6" Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.256615 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee5aa348-6635-441e-99aa-0c4776381dd1-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.256785 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whffb\" (UniqueName: \"kubernetes.io/projected/ee5aa348-6635-441e-99aa-0c4776381dd1-kube-api-access-whffb\") on node \"crc\" DevicePath \"\"" Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.461013 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sp442"] Jan 27 12:19:38 crc kubenswrapper[4745]: W0127 12:19:38.468158 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod502a401d_0f57_4a44_a241_a6150f1e3c48.slice/crio-0542c7c5551cff412bb2324b75b423ab8d062d01ebae4f3fcfa7eb976aa29a3e WatchSource:0}: Error finding container 0542c7c5551cff412bb2324b75b423ab8d062d01ebae4f3fcfa7eb976aa29a3e: Status 404 returned error can't find the container with id 0542c7c5551cff412bb2324b75b423ab8d062d01ebae4f3fcfa7eb976aa29a3e Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.660906 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-756c9cd684-rr9l6"] Jan 27 12:19:38 crc kubenswrapper[4745]: W0127 12:19:38.668048 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb96cf09a_c9fc_4831_ac5f_6f41fc66b348.slice/crio-74b30ec62584c6cb87096eb60625551bc78ed234878c99dac811e34b0fcb375b WatchSource:0}: Error finding container 74b30ec62584c6cb87096eb60625551bc78ed234878c99dac811e34b0fcb375b: Status 404 returned error 
can't find the container with id 74b30ec62584c6cb87096eb60625551bc78ed234878c99dac811e34b0fcb375b Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.968125 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-756c9cd684-rr9l6" event={"ID":"b96cf09a-c9fc-4831-ac5f-6f41fc66b348","Type":"ContainerStarted","Data":"74b30ec62584c6cb87096eb60625551bc78ed234878c99dac811e34b0fcb375b"} Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.972014 4745 generic.go:334] "Generic (PLEG): container finished" podID="502a401d-0f57-4a44-a241-a6150f1e3c48" containerID="bfa55c584db7da7b74a39b335ef1cab2a5e556e017d7711dad811f52ba678d0b" exitCode=0 Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.972070 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sp442" event={"ID":"502a401d-0f57-4a44-a241-a6150f1e3c48","Type":"ContainerDied","Data":"bfa55c584db7da7b74a39b335ef1cab2a5e556e017d7711dad811f52ba678d0b"} Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.972093 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sp442" event={"ID":"502a401d-0f57-4a44-a241-a6150f1e3c48","Type":"ContainerStarted","Data":"0542c7c5551cff412bb2324b75b423ab8d062d01ebae4f3fcfa7eb976aa29a3e"} Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.982499 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw" Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.983050 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw" event={"ID":"ee5aa348-6635-441e-99aa-0c4776381dd1","Type":"ContainerDied","Data":"be261dd94dd9119f8a98585393da269b3b59cd42d85a2ce41902d3add44d5f93"} Jan 27 12:19:38 crc kubenswrapper[4745]: I0127 12:19:38.983089 4745 scope.go:117] "RemoveContainer" containerID="631a85af1321da1940966aaebf694339643f7c6c0ff5d6459e0f3034aa3a0244" Jan 27 12:19:39 crc kubenswrapper[4745]: I0127 12:19:39.012720 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw"] Jan 27 12:19:39 crc kubenswrapper[4745]: I0127 12:19:39.016699 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75b76566b9-kv5qw"] Jan 27 12:19:39 crc kubenswrapper[4745]: I0127 12:19:39.898365 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk"] Jan 27 12:19:39 crc kubenswrapper[4745]: E0127 12:19:39.898945 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5aa348-6635-441e-99aa-0c4776381dd1" containerName="route-controller-manager" Jan 27 12:19:39 crc kubenswrapper[4745]: I0127 12:19:39.898966 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5aa348-6635-441e-99aa-0c4776381dd1" containerName="route-controller-manager" Jan 27 12:19:39 crc kubenswrapper[4745]: I0127 12:19:39.899075 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee5aa348-6635-441e-99aa-0c4776381dd1" containerName="route-controller-manager" Jan 27 12:19:39 crc kubenswrapper[4745]: I0127 12:19:39.900432 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk" Jan 27 12:19:39 crc kubenswrapper[4745]: I0127 12:19:39.902763 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 12:19:39 crc kubenswrapper[4745]: I0127 12:19:39.903129 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 12:19:39 crc kubenswrapper[4745]: I0127 12:19:39.903190 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 12:19:39 crc kubenswrapper[4745]: I0127 12:19:39.903290 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 12:19:39 crc kubenswrapper[4745]: I0127 12:19:39.903582 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 12:19:39 crc kubenswrapper[4745]: I0127 12:19:39.906442 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 12:19:39 crc kubenswrapper[4745]: I0127 12:19:39.910029 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk"] Jan 27 12:19:39 crc kubenswrapper[4745]: I0127 12:19:39.925305 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qmb5t"] Jan 27 12:19:39 crc kubenswrapper[4745]: I0127 12:19:39.928390 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qmb5t" Jan 27 12:19:39 crc kubenswrapper[4745]: I0127 12:19:39.930911 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 12:19:39 crc kubenswrapper[4745]: I0127 12:19:39.942276 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qmb5t"] Jan 27 12:19:39 crc kubenswrapper[4745]: I0127 12:19:39.979484 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/681c7580-eb96-4022-9795-cd4306094a03-catalog-content\") pod \"community-operators-qmb5t\" (UID: \"681c7580-eb96-4022-9795-cd4306094a03\") " pod="openshift-marketplace/community-operators-qmb5t" Jan 27 12:19:39 crc kubenswrapper[4745]: I0127 12:19:39.979614 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg4rm\" (UniqueName: \"kubernetes.io/projected/e0e7e64e-4982-4725-99c0-e1240d177ea4-kube-api-access-tg4rm\") pod \"route-controller-manager-58c9cdf859-lpcsk\" (UID: \"e0e7e64e-4982-4725-99c0-e1240d177ea4\") " pod="openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk" Jan 27 12:19:39 crc kubenswrapper[4745]: I0127 12:19:39.979777 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdchf\" (UniqueName: \"kubernetes.io/projected/681c7580-eb96-4022-9795-cd4306094a03-kube-api-access-bdchf\") pod \"community-operators-qmb5t\" (UID: \"681c7580-eb96-4022-9795-cd4306094a03\") " pod="openshift-marketplace/community-operators-qmb5t" Jan 27 12:19:39 crc kubenswrapper[4745]: I0127 
12:19:39.979910 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0e7e64e-4982-4725-99c0-e1240d177ea4-client-ca\") pod \"route-controller-manager-58c9cdf859-lpcsk\" (UID: \"e0e7e64e-4982-4725-99c0-e1240d177ea4\") " pod="openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk" Jan 27 12:19:39 crc kubenswrapper[4745]: I0127 12:19:39.979939 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0e7e64e-4982-4725-99c0-e1240d177ea4-serving-cert\") pod \"route-controller-manager-58c9cdf859-lpcsk\" (UID: \"e0e7e64e-4982-4725-99c0-e1240d177ea4\") " pod="openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk" Jan 27 12:19:39 crc kubenswrapper[4745]: I0127 12:19:39.979962 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/681c7580-eb96-4022-9795-cd4306094a03-utilities\") pod \"community-operators-qmb5t\" (UID: \"681c7580-eb96-4022-9795-cd4306094a03\") " pod="openshift-marketplace/community-operators-qmb5t" Jan 27 12:19:39 crc kubenswrapper[4745]: I0127 12:19:39.979985 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0e7e64e-4982-4725-99c0-e1240d177ea4-config\") pod \"route-controller-manager-58c9cdf859-lpcsk\" (UID: \"e0e7e64e-4982-4725-99c0-e1240d177ea4\") " pod="openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk" Jan 27 12:19:39 crc kubenswrapper[4745]: I0127 12:19:39.992174 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-756c9cd684-rr9l6" event={"ID":"b96cf09a-c9fc-4831-ac5f-6f41fc66b348","Type":"ContainerStarted","Data":"8c946c80ddadfa1c9cad62e31f4e1f8669e91ab8509d759d47b7f4e2c87aec49"} Jan 27 12:19:39 crc kubenswrapper[4745]: I0127 12:19:39.992546 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-756c9cd684-rr9l6" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.007533 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-756c9cd684-rr9l6" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.016936 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-756c9cd684-rr9l6" podStartSLOduration=4.016918742 podStartE2EDuration="4.016918742s" podCreationTimestamp="2026-01-27 12:19:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:19:40.015560052 +0000 UTC m=+472.820470740" watchObservedRunningTime="2026-01-27 12:19:40.016918742 +0000 UTC m=+472.821829430" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.080366 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg4rm\" (UniqueName: \"kubernetes.io/projected/e0e7e64e-4982-4725-99c0-e1240d177ea4-kube-api-access-tg4rm\") pod \"route-controller-manager-58c9cdf859-lpcsk\" (UID: \"e0e7e64e-4982-4725-99c0-e1240d177ea4\") " pod="openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 
12:19:40.080422 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdchf\" (UniqueName: \"kubernetes.io/projected/681c7580-eb96-4022-9795-cd4306094a03-kube-api-access-bdchf\") pod \"community-operators-qmb5t\" (UID: \"681c7580-eb96-4022-9795-cd4306094a03\") " pod="openshift-marketplace/community-operators-qmb5t" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.080478 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0e7e64e-4982-4725-99c0-e1240d177ea4-client-ca\") pod \"route-controller-manager-58c9cdf859-lpcsk\" (UID: \"e0e7e64e-4982-4725-99c0-e1240d177ea4\") " pod="openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.080503 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0e7e64e-4982-4725-99c0-e1240d177ea4-serving-cert\") pod \"route-controller-manager-58c9cdf859-lpcsk\" (UID: \"e0e7e64e-4982-4725-99c0-e1240d177ea4\") " pod="openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.080520 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/681c7580-eb96-4022-9795-cd4306094a03-utilities\") pod \"community-operators-qmb5t\" (UID: \"681c7580-eb96-4022-9795-cd4306094a03\") " pod="openshift-marketplace/community-operators-qmb5t" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.080547 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0e7e64e-4982-4725-99c0-e1240d177ea4-config\") pod \"route-controller-manager-58c9cdf859-lpcsk\" (UID: \"e0e7e64e-4982-4725-99c0-e1240d177ea4\") " pod="openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.080599 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/681c7580-eb96-4022-9795-cd4306094a03-catalog-content\") pod \"community-operators-qmb5t\" (UID: \"681c7580-eb96-4022-9795-cd4306094a03\") " pod="openshift-marketplace/community-operators-qmb5t" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.081767 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0e7e64e-4982-4725-99c0-e1240d177ea4-client-ca\") pod \"route-controller-manager-58c9cdf859-lpcsk\" (UID: \"e0e7e64e-4982-4725-99c0-e1240d177ea4\") " pod="openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.082716 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0e7e64e-4982-4725-99c0-e1240d177ea4-config\") pod \"route-controller-manager-58c9cdf859-lpcsk\" (UID: \"e0e7e64e-4982-4725-99c0-e1240d177ea4\") " pod="openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.090417 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="341a8942-834f-4f76-8269-7ecdecaaa1b0" path="/var/lib/kubelet/pods/341a8942-834f-4f76-8269-7ecdecaaa1b0/volumes" Jan 27 12:19:40 crc 
kubenswrapper[4745]: I0127 12:19:40.091138 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c6f4dda-1294-4903-a4c1-6685307c3b25" path="/var/lib/kubelet/pods/7c6f4dda-1294-4903-a4c1-6685307c3b25/volumes" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.091770 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee5aa348-6635-441e-99aa-0c4776381dd1" path="/var/lib/kubelet/pods/ee5aa348-6635-441e-99aa-0c4776381dd1/volumes" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.092590 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/681c7580-eb96-4022-9795-cd4306094a03-catalog-content\") pod \"community-operators-qmb5t\" (UID: \"681c7580-eb96-4022-9795-cd4306094a03\") " pod="openshift-marketplace/community-operators-qmb5t" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.092877 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/681c7580-eb96-4022-9795-cd4306094a03-utilities\") pod \"community-operators-qmb5t\" (UID: \"681c7580-eb96-4022-9795-cd4306094a03\") " pod="openshift-marketplace/community-operators-qmb5t" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.096144 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0e7e64e-4982-4725-99c0-e1240d177ea4-serving-cert\") pod \"route-controller-manager-58c9cdf859-lpcsk\" (UID: \"e0e7e64e-4982-4725-99c0-e1240d177ea4\") " pod="openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.097133 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdchf\" (UniqueName: \"kubernetes.io/projected/681c7580-eb96-4022-9795-cd4306094a03-kube-api-access-bdchf\") pod \"community-operators-qmb5t\" (UID: \"681c7580-eb96-4022-9795-cd4306094a03\") " pod="openshift-marketplace/community-operators-qmb5t" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.099699 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg4rm\" (UniqueName: \"kubernetes.io/projected/e0e7e64e-4982-4725-99c0-e1240d177ea4-kube-api-access-tg4rm\") pod \"route-controller-manager-58c9cdf859-lpcsk\" (UID: \"e0e7e64e-4982-4725-99c0-e1240d177ea4\") " pod="openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.131552 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lhq6b"] Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.132781 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lhq6b" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.136612 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.138097 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lhq6b"] Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.219162 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.251424 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qmb5t" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.282582 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b208e24e-eb1e-4ad3-bb95-5c6ff4581b25-utilities\") pod \"redhat-operators-lhq6b\" (UID: \"b208e24e-eb1e-4ad3-bb95-5c6ff4581b25\") " pod="openshift-marketplace/redhat-operators-lhq6b" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.282645 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzbsw\" (UniqueName: \"kubernetes.io/projected/b208e24e-eb1e-4ad3-bb95-5c6ff4581b25-kube-api-access-qzbsw\") pod \"redhat-operators-lhq6b\" (UID: \"b208e24e-eb1e-4ad3-bb95-5c6ff4581b25\") " pod="openshift-marketplace/redhat-operators-lhq6b" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.282723 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b208e24e-eb1e-4ad3-bb95-5c6ff4581b25-catalog-content\") pod \"redhat-operators-lhq6b\" (UID: \"b208e24e-eb1e-4ad3-bb95-5c6ff4581b25\") " pod="openshift-marketplace/redhat-operators-lhq6b" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.383838 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzbsw\" (UniqueName: \"kubernetes.io/projected/b208e24e-eb1e-4ad3-bb95-5c6ff4581b25-kube-api-access-qzbsw\") pod \"redhat-operators-lhq6b\" (UID: \"b208e24e-eb1e-4ad3-bb95-5c6ff4581b25\") " pod="openshift-marketplace/redhat-operators-lhq6b" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.383903 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b208e24e-eb1e-4ad3-bb95-5c6ff4581b25-catalog-content\") pod \"redhat-operators-lhq6b\" (UID: \"b208e24e-eb1e-4ad3-bb95-5c6ff4581b25\") " pod="openshift-marketplace/redhat-operators-lhq6b" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.383953 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b208e24e-eb1e-4ad3-bb95-5c6ff4581b25-utilities\") pod \"redhat-operators-lhq6b\" (UID: \"b208e24e-eb1e-4ad3-bb95-5c6ff4581b25\") " pod="openshift-marketplace/redhat-operators-lhq6b" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.384375 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b208e24e-eb1e-4ad3-bb95-5c6ff4581b25-utilities\") pod \"redhat-operators-lhq6b\" (UID: \"b208e24e-eb1e-4ad3-bb95-5c6ff4581b25\") " pod="openshift-marketplace/redhat-operators-lhq6b" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.384476 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b208e24e-eb1e-4ad3-bb95-5c6ff4581b25-catalog-content\") pod \"redhat-operators-lhq6b\" (UID: \"b208e24e-eb1e-4ad3-bb95-5c6ff4581b25\") " pod="openshift-marketplace/redhat-operators-lhq6b" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.404842 4745 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qzbsw\" (UniqueName: \"kubernetes.io/projected/b208e24e-eb1e-4ad3-bb95-5c6ff4581b25-kube-api-access-qzbsw\") pod \"redhat-operators-lhq6b\" (UID: \"b208e24e-eb1e-4ad3-bb95-5c6ff4581b25\") " pod="openshift-marketplace/redhat-operators-lhq6b" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.455211 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lhq6b" Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.665146 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qmb5t"] Jan 27 12:19:40 crc kubenswrapper[4745]: W0127 12:19:40.672481 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod681c7580_eb96_4022_9795_cd4306094a03.slice/crio-21128cd7b2cdd32f04334edcda7d241fd1a24fd70b12d1d1445a5bf385932e57 WatchSource:0}: Error finding container 21128cd7b2cdd32f04334edcda7d241fd1a24fd70b12d1d1445a5bf385932e57: Status 404 returned error can't find the container with id 21128cd7b2cdd32f04334edcda7d241fd1a24fd70b12d1d1445a5bf385932e57 Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.678241 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk"] Jan 27 12:19:40 crc kubenswrapper[4745]: W0127 12:19:40.684386 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0e7e64e_4982_4725_99c0_e1240d177ea4.slice/crio-d774fc8469b0728cb64c2693438540573bb0d9344eef3007affd59c7ef124d6d WatchSource:0}: Error finding container d774fc8469b0728cb64c2693438540573bb0d9344eef3007affd59c7ef124d6d: Status 404 returned error can't find the container with id d774fc8469b0728cb64c2693438540573bb0d9344eef3007affd59c7ef124d6d Jan 27 12:19:40 crc kubenswrapper[4745]: I0127 12:19:40.869456 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lhq6b"] Jan 27 12:19:40 crc kubenswrapper[4745]: W0127 12:19:40.875908 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb208e24e_eb1e_4ad3_bb95_5c6ff4581b25.slice/crio-7c681e923bf59d88319e4abfa0b344e7ca043d413c7f87342fddb8c7666d024e WatchSource:0}: Error finding container 7c681e923bf59d88319e4abfa0b344e7ca043d413c7f87342fddb8c7666d024e: Status 404 returned error can't find the container with id 7c681e923bf59d88319e4abfa0b344e7ca043d413c7f87342fddb8c7666d024e Jan 27 12:19:41 crc kubenswrapper[4745]: I0127 12:19:41.000735 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk" event={"ID":"e0e7e64e-4982-4725-99c0-e1240d177ea4","Type":"ContainerStarted","Data":"d774fc8469b0728cb64c2693438540573bb0d9344eef3007affd59c7ef124d6d"} Jan 27 12:19:41 crc kubenswrapper[4745]: I0127 12:19:41.001855 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhq6b" event={"ID":"b208e24e-eb1e-4ad3-bb95-5c6ff4581b25","Type":"ContainerStarted","Data":"7c681e923bf59d88319e4abfa0b344e7ca043d413c7f87342fddb8c7666d024e"} Jan 27 12:19:41 crc kubenswrapper[4745]: I0127 12:19:41.002874 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qmb5t" 
event={"ID":"681c7580-eb96-4022-9795-cd4306094a03","Type":"ContainerStarted","Data":"21128cd7b2cdd32f04334edcda7d241fd1a24fd70b12d1d1445a5bf385932e57"} Jan 27 12:19:42 crc kubenswrapper[4745]: I0127 12:19:42.010084 4745 generic.go:334] "Generic (PLEG): container finished" podID="b208e24e-eb1e-4ad3-bb95-5c6ff4581b25" containerID="ad156e6ba4e5e5cf0bbaa00eb309c904a59580c2096e598d7d1f218c04c40cd5" exitCode=0 Jan 27 12:19:42 crc kubenswrapper[4745]: I0127 12:19:42.010353 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhq6b" event={"ID":"b208e24e-eb1e-4ad3-bb95-5c6ff4581b25","Type":"ContainerDied","Data":"ad156e6ba4e5e5cf0bbaa00eb309c904a59580c2096e598d7d1f218c04c40cd5"} Jan 27 12:19:42 crc kubenswrapper[4745]: I0127 12:19:42.012896 4745 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 12:19:42 crc kubenswrapper[4745]: I0127 12:19:42.013393 4745 generic.go:334] "Generic (PLEG): container finished" podID="502a401d-0f57-4a44-a241-a6150f1e3c48" containerID="e2efb18d63024acb1b5310a1a5b60854b7ffa5b7e4b14eb5090ae8c1b47a3a54" exitCode=0 Jan 27 12:19:42 crc kubenswrapper[4745]: I0127 12:19:42.013468 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sp442" event={"ID":"502a401d-0f57-4a44-a241-a6150f1e3c48","Type":"ContainerDied","Data":"e2efb18d63024acb1b5310a1a5b60854b7ffa5b7e4b14eb5090ae8c1b47a3a54"} Jan 27 12:19:42 crc kubenswrapper[4745]: I0127 12:19:42.016085 4745 generic.go:334] "Generic (PLEG): container finished" podID="681c7580-eb96-4022-9795-cd4306094a03" containerID="d1a32fdbc7b5ede2c72e419e27c45bdb21e4fff647bef242b9afbe93a77babe2" exitCode=0 Jan 27 12:19:42 crc kubenswrapper[4745]: I0127 12:19:42.016138 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qmb5t" event={"ID":"681c7580-eb96-4022-9795-cd4306094a03","Type":"ContainerDied","Data":"d1a32fdbc7b5ede2c72e419e27c45bdb21e4fff647bef242b9afbe93a77babe2"} Jan 27 12:19:42 crc kubenswrapper[4745]: I0127 12:19:42.021394 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk" event={"ID":"e0e7e64e-4982-4725-99c0-e1240d177ea4","Type":"ContainerStarted","Data":"938395063dab5a3d086d04cc6e5c5ac21b8d77a406f16d79234e7ba18ea78440"} Jan 27 12:19:42 crc kubenswrapper[4745]: I0127 12:19:42.021466 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk" Jan 27 12:19:42 crc kubenswrapper[4745]: I0127 12:19:42.025679 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk" Jan 27 12:19:42 crc kubenswrapper[4745]: I0127 12:19:42.078744 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk" podStartSLOduration=6.078722041 podStartE2EDuration="6.078722041s" podCreationTimestamp="2026-01-27 12:19:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:19:42.076880228 +0000 UTC m=+474.881790916" watchObservedRunningTime="2026-01-27 12:19:42.078722041 +0000 UTC m=+474.883632729" Jan 27 12:19:42 crc kubenswrapper[4745]: I0127 12:19:42.331440 4745 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-nj9xw"] Jan 27 12:19:42 crc kubenswrapper[4745]: I0127 12:19:42.333102 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nj9xw" Jan 27 12:19:42 crc kubenswrapper[4745]: I0127 12:19:42.335374 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 12:19:42 crc kubenswrapper[4745]: I0127 12:19:42.343732 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nj9xw"] Jan 27 12:19:42 crc kubenswrapper[4745]: I0127 12:19:42.406799 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f1cace8-3efb-4577-864e-6eb4c1ff4b6e-utilities\") pod \"certified-operators-nj9xw\" (UID: \"6f1cace8-3efb-4577-864e-6eb4c1ff4b6e\") " pod="openshift-marketplace/certified-operators-nj9xw" Jan 27 12:19:42 crc kubenswrapper[4745]: I0127 12:19:42.406948 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnp8r\" (UniqueName: \"kubernetes.io/projected/6f1cace8-3efb-4577-864e-6eb4c1ff4b6e-kube-api-access-nnp8r\") pod \"certified-operators-nj9xw\" (UID: \"6f1cace8-3efb-4577-864e-6eb4c1ff4b6e\") " pod="openshift-marketplace/certified-operators-nj9xw" Jan 27 12:19:42 crc kubenswrapper[4745]: I0127 12:19:42.407203 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f1cace8-3efb-4577-864e-6eb4c1ff4b6e-catalog-content\") pod \"certified-operators-nj9xw\" (UID: \"6f1cace8-3efb-4577-864e-6eb4c1ff4b6e\") " pod="openshift-marketplace/certified-operators-nj9xw" Jan 27 12:19:42 crc kubenswrapper[4745]: I0127 12:19:42.508470 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f1cace8-3efb-4577-864e-6eb4c1ff4b6e-utilities\") pod \"certified-operators-nj9xw\" (UID: \"6f1cace8-3efb-4577-864e-6eb4c1ff4b6e\") " pod="openshift-marketplace/certified-operators-nj9xw" Jan 27 12:19:42 crc kubenswrapper[4745]: I0127 12:19:42.508553 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnp8r\" (UniqueName: \"kubernetes.io/projected/6f1cace8-3efb-4577-864e-6eb4c1ff4b6e-kube-api-access-nnp8r\") pod \"certified-operators-nj9xw\" (UID: \"6f1cace8-3efb-4577-864e-6eb4c1ff4b6e\") " pod="openshift-marketplace/certified-operators-nj9xw" Jan 27 12:19:42 crc kubenswrapper[4745]: I0127 12:19:42.508610 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f1cace8-3efb-4577-864e-6eb4c1ff4b6e-catalog-content\") pod \"certified-operators-nj9xw\" (UID: \"6f1cace8-3efb-4577-864e-6eb4c1ff4b6e\") " pod="openshift-marketplace/certified-operators-nj9xw" Jan 27 12:19:42 crc kubenswrapper[4745]: I0127 12:19:42.509306 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f1cace8-3efb-4577-864e-6eb4c1ff4b6e-catalog-content\") pod \"certified-operators-nj9xw\" (UID: \"6f1cace8-3efb-4577-864e-6eb4c1ff4b6e\") " pod="openshift-marketplace/certified-operators-nj9xw" Jan 27 12:19:42 crc kubenswrapper[4745]: I0127 12:19:42.509345 4745 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f1cace8-3efb-4577-864e-6eb4c1ff4b6e-utilities\") pod \"certified-operators-nj9xw\" (UID: \"6f1cace8-3efb-4577-864e-6eb4c1ff4b6e\") " pod="openshift-marketplace/certified-operators-nj9xw" Jan 27 12:19:42 crc kubenswrapper[4745]: I0127 12:19:42.528683 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnp8r\" (UniqueName: \"kubernetes.io/projected/6f1cace8-3efb-4577-864e-6eb4c1ff4b6e-kube-api-access-nnp8r\") pod \"certified-operators-nj9xw\" (UID: \"6f1cace8-3efb-4577-864e-6eb4c1ff4b6e\") " pod="openshift-marketplace/certified-operators-nj9xw" Jan 27 12:19:42 crc kubenswrapper[4745]: I0127 12:19:42.649592 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nj9xw" Jan 27 12:19:43 crc kubenswrapper[4745]: I0127 12:19:43.067379 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nj9xw"] Jan 27 12:19:44 crc kubenswrapper[4745]: I0127 12:19:44.030799 4745 generic.go:334] "Generic (PLEG): container finished" podID="681c7580-eb96-4022-9795-cd4306094a03" containerID="f896cf49cf7ad000df86b214d82a5d6e27f7ee5ee51f975ce851507b1b49faf7" exitCode=0 Jan 27 12:19:44 crc kubenswrapper[4745]: I0127 12:19:44.030854 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qmb5t" event={"ID":"681c7580-eb96-4022-9795-cd4306094a03","Type":"ContainerDied","Data":"f896cf49cf7ad000df86b214d82a5d6e27f7ee5ee51f975ce851507b1b49faf7"} Jan 27 12:19:44 crc kubenswrapper[4745]: I0127 12:19:44.032912 4745 generic.go:334] "Generic (PLEG): container finished" podID="6f1cace8-3efb-4577-864e-6eb4c1ff4b6e" containerID="296040eeca25458837fd5d302d79dc58cee25991c9a4cc606da4a4e997a19fd6" exitCode=0 Jan 27 12:19:44 crc kubenswrapper[4745]: I0127 12:19:44.032994 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nj9xw" event={"ID":"6f1cace8-3efb-4577-864e-6eb4c1ff4b6e","Type":"ContainerDied","Data":"296040eeca25458837fd5d302d79dc58cee25991c9a4cc606da4a4e997a19fd6"} Jan 27 12:19:44 crc kubenswrapper[4745]: I0127 12:19:44.033018 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nj9xw" event={"ID":"6f1cace8-3efb-4577-864e-6eb4c1ff4b6e","Type":"ContainerStarted","Data":"85661e8ee331aaba8140dd5b31ea02a40e6c1a4ab4489f6cc8359c1dea15cafa"} Jan 27 12:19:44 crc kubenswrapper[4745]: I0127 12:19:44.035642 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhq6b" event={"ID":"b208e24e-eb1e-4ad3-bb95-5c6ff4581b25","Type":"ContainerStarted","Data":"8ed895615284b97a567b69bc7ba631215680d97a9a70e49e545f97b07d39f80a"} Jan 27 12:19:44 crc kubenswrapper[4745]: I0127 12:19:44.046710 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sp442" event={"ID":"502a401d-0f57-4a44-a241-a6150f1e3c48","Type":"ContainerStarted","Data":"d0d3b91e4bee82c4deb9fd252ad21a2706b755b3a9f1700f2ac04a81a3c0c883"} Jan 27 12:19:44 crc kubenswrapper[4745]: I0127 12:19:44.094415 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sp442" podStartSLOduration=2.994957232 podStartE2EDuration="7.094397738s" podCreationTimestamp="2026-01-27 12:19:37 +0000 UTC" firstStartedPulling="2026-01-27 12:19:38.973689873 +0000 UTC 
m=+471.778600561" lastFinishedPulling="2026-01-27 12:19:43.073130379 +0000 UTC m=+475.878041067" observedRunningTime="2026-01-27 12:19:44.077848887 +0000 UTC m=+476.882759575" watchObservedRunningTime="2026-01-27 12:19:44.094397738 +0000 UTC m=+476.899308426" Jan 27 12:19:45 crc kubenswrapper[4745]: I0127 12:19:45.059041 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qmb5t" event={"ID":"681c7580-eb96-4022-9795-cd4306094a03","Type":"ContainerStarted","Data":"78ceea079a4066f905659b9abd0bafd4d19b1cb31ff8d4b15cb1fd67839c0220"} Jan 27 12:19:45 crc kubenswrapper[4745]: I0127 12:19:45.063258 4745 generic.go:334] "Generic (PLEG): container finished" podID="b208e24e-eb1e-4ad3-bb95-5c6ff4581b25" containerID="8ed895615284b97a567b69bc7ba631215680d97a9a70e49e545f97b07d39f80a" exitCode=0 Jan 27 12:19:45 crc kubenswrapper[4745]: I0127 12:19:45.063428 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhq6b" event={"ID":"b208e24e-eb1e-4ad3-bb95-5c6ff4581b25","Type":"ContainerDied","Data":"8ed895615284b97a567b69bc7ba631215680d97a9a70e49e545f97b07d39f80a"} Jan 27 12:19:45 crc kubenswrapper[4745]: I0127 12:19:45.077393 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qmb5t" podStartSLOduration=3.393171086 podStartE2EDuration="6.077377104s" podCreationTimestamp="2026-01-27 12:19:39 +0000 UTC" firstStartedPulling="2026-01-27 12:19:42.017401298 +0000 UTC m=+474.822311986" lastFinishedPulling="2026-01-27 12:19:44.701607316 +0000 UTC m=+477.506518004" observedRunningTime="2026-01-27 12:19:45.076712355 +0000 UTC m=+477.881623053" watchObservedRunningTime="2026-01-27 12:19:45.077377104 +0000 UTC m=+477.882287792" Jan 27 12:19:46 crc kubenswrapper[4745]: I0127 12:19:46.070226 4745 generic.go:334] "Generic (PLEG): container finished" podID="6f1cace8-3efb-4577-864e-6eb4c1ff4b6e" containerID="a0b8a5fe0ab45554c7d201de9fc3ff29552da0300a2da4e5fc8ed8f8312cd824" exitCode=0 Jan 27 12:19:46 crc kubenswrapper[4745]: I0127 12:19:46.070497 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nj9xw" event={"ID":"6f1cace8-3efb-4577-864e-6eb4c1ff4b6e","Type":"ContainerDied","Data":"a0b8a5fe0ab45554c7d201de9fc3ff29552da0300a2da4e5fc8ed8f8312cd824"} Jan 27 12:19:47 crc kubenswrapper[4745]: I0127 12:19:47.081451 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhq6b" event={"ID":"b208e24e-eb1e-4ad3-bb95-5c6ff4581b25","Type":"ContainerStarted","Data":"edb3512d2ea934ef1da09b6c0cc6239b5d3e70107e8fab85306f37f37ba6da60"} Jan 27 12:19:47 crc kubenswrapper[4745]: I0127 12:19:47.111127 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lhq6b" podStartSLOduration=3.200813802 podStartE2EDuration="7.111095876s" podCreationTimestamp="2026-01-27 12:19:40 +0000 UTC" firstStartedPulling="2026-01-27 12:19:42.012636439 +0000 UTC m=+474.817547127" lastFinishedPulling="2026-01-27 12:19:45.922918523 +0000 UTC m=+478.727829201" observedRunningTime="2026-01-27 12:19:47.10674204 +0000 UTC m=+479.911652728" watchObservedRunningTime="2026-01-27 12:19:47.111095876 +0000 UTC m=+479.916006574" Jan 27 12:19:48 crc kubenswrapper[4745]: I0127 12:19:48.041732 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sp442" Jan 27 12:19:48 crc kubenswrapper[4745]: I0127 
12:19:48.042127 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sp442" Jan 27 12:19:48 crc kubenswrapper[4745]: I0127 12:19:48.087113 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sp442" Jan 27 12:19:48 crc kubenswrapper[4745]: I0127 12:19:48.134124 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sp442" Jan 27 12:19:49 crc kubenswrapper[4745]: I0127 12:19:49.102003 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nj9xw" event={"ID":"6f1cace8-3efb-4577-864e-6eb4c1ff4b6e","Type":"ContainerStarted","Data":"96e853dec5cb3a7138524330f9eace54917fa746b6e21dc0774bc84cdf22cfec"} Jan 27 12:19:49 crc kubenswrapper[4745]: I0127 12:19:49.121549 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nj9xw" podStartSLOduration=4.015906408 podStartE2EDuration="7.121531912s" podCreationTimestamp="2026-01-27 12:19:42 +0000 UTC" firstStartedPulling="2026-01-27 12:19:44.034666011 +0000 UTC m=+476.839576699" lastFinishedPulling="2026-01-27 12:19:47.140291495 +0000 UTC m=+479.945202203" observedRunningTime="2026-01-27 12:19:49.11941487 +0000 UTC m=+481.924325568" watchObservedRunningTime="2026-01-27 12:19:49.121531912 +0000 UTC m=+481.926442600" Jan 27 12:19:50 crc kubenswrapper[4745]: I0127 12:19:50.254075 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qmb5t" Jan 27 12:19:50 crc kubenswrapper[4745]: I0127 12:19:50.254164 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qmb5t" Jan 27 12:19:50 crc kubenswrapper[4745]: I0127 12:19:50.405307 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qmb5t" Jan 27 12:19:50 crc kubenswrapper[4745]: I0127 12:19:50.456494 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lhq6b" Jan 27 12:19:50 crc kubenswrapper[4745]: I0127 12:19:50.456600 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lhq6b" Jan 27 12:19:51 crc kubenswrapper[4745]: I0127 12:19:51.151445 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qmb5t" Jan 27 12:19:51 crc kubenswrapper[4745]: I0127 12:19:51.510095 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lhq6b" podUID="b208e24e-eb1e-4ad3-bb95-5c6ff4581b25" containerName="registry-server" probeResult="failure" output=< Jan 27 12:19:51 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 27 12:19:51 crc kubenswrapper[4745]: > Jan 27 12:19:52 crc kubenswrapper[4745]: I0127 12:19:52.650463 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nj9xw" Jan 27 12:19:52 crc kubenswrapper[4745]: I0127 12:19:52.650901 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nj9xw" Jan 27 12:19:52 crc kubenswrapper[4745]: I0127 12:19:52.688850 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-nj9xw" Jan 27 12:19:53 crc kubenswrapper[4745]: I0127 12:19:53.186183 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nj9xw" Jan 27 12:19:56 crc kubenswrapper[4745]: I0127 12:19:56.269295 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-756c9cd684-rr9l6"] Jan 27 12:19:56 crc kubenswrapper[4745]: I0127 12:19:56.269762 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-756c9cd684-rr9l6" podUID="b96cf09a-c9fc-4831-ac5f-6f41fc66b348" containerName="controller-manager" containerID="cri-o://8c946c80ddadfa1c9cad62e31f4e1f8669e91ab8509d759d47b7f4e2c87aec49" gracePeriod=30 Jan 27 12:19:56 crc kubenswrapper[4745]: I0127 12:19:56.346472 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk"] Jan 27 12:19:56 crc kubenswrapper[4745]: I0127 12:19:56.346681 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk" podUID="e0e7e64e-4982-4725-99c0-e1240d177ea4" containerName="route-controller-manager" containerID="cri-o://938395063dab5a3d086d04cc6e5c5ac21b8d77a406f16d79234e7ba18ea78440" gracePeriod=30 Jan 27 12:19:58 crc kubenswrapper[4745]: I0127 12:19:58.148635 4745 generic.go:334] "Generic (PLEG): container finished" podID="b96cf09a-c9fc-4831-ac5f-6f41fc66b348" containerID="8c946c80ddadfa1c9cad62e31f4e1f8669e91ab8509d759d47b7f4e2c87aec49" exitCode=0 Jan 27 12:19:58 crc kubenswrapper[4745]: I0127 12:19:58.148696 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-756c9cd684-rr9l6" event={"ID":"b96cf09a-c9fc-4831-ac5f-6f41fc66b348","Type":"ContainerDied","Data":"8c946c80ddadfa1c9cad62e31f4e1f8669e91ab8509d759d47b7f4e2c87aec49"} Jan 27 12:19:58 crc kubenswrapper[4745]: I0127 12:19:58.255452 4745 patch_prober.go:28] interesting pod/controller-manager-756c9cd684-rr9l6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" start-of-body= Jan 27 12:19:58 crc kubenswrapper[4745]: I0127 12:19:58.255534 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-756c9cd684-rr9l6" podUID="b96cf09a-c9fc-4831-ac5f-6f41fc66b348" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" Jan 27 12:20:00 crc kubenswrapper[4745]: I0127 12:20:00.160933 4745 generic.go:334] "Generic (PLEG): container finished" podID="e0e7e64e-4982-4725-99c0-e1240d177ea4" containerID="938395063dab5a3d086d04cc6e5c5ac21b8d77a406f16d79234e7ba18ea78440" exitCode=0 Jan 27 12:20:00 crc kubenswrapper[4745]: I0127 12:20:00.161032 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk" event={"ID":"e0e7e64e-4982-4725-99c0-e1240d177ea4","Type":"ContainerDied","Data":"938395063dab5a3d086d04cc6e5c5ac21b8d77a406f16d79234e7ba18ea78440"} Jan 27 12:20:00 crc kubenswrapper[4745]: I0127 12:20:00.236250 4745 patch_prober.go:28] interesting pod/route-controller-manager-58c9cdf859-lpcsk 
container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.75:8443/healthz\": dial tcp 10.217.0.75:8443: connect: connection refused" start-of-body= Jan 27 12:20:00 crc kubenswrapper[4745]: I0127 12:20:00.236322 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk" podUID="e0e7e64e-4982-4725-99c0-e1240d177ea4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.75:8443/healthz\": dial tcp 10.217.0.75:8443: connect: connection refused" Jan 27 12:20:00 crc kubenswrapper[4745]: I0127 12:20:00.498135 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lhq6b" Jan 27 12:20:00 crc kubenswrapper[4745]: I0127 12:20:00.589542 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lhq6b" Jan 27 12:20:00 crc kubenswrapper[4745]: I0127 12:20:00.931700 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk" Jan 27 12:20:00 crc kubenswrapper[4745]: I0127 12:20:00.935613 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-756c9cd684-rr9l6" Jan 27 12:20:00 crc kubenswrapper[4745]: I0127 12:20:00.962400 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bfb445888-jg2p2"] Jan 27 12:20:00 crc kubenswrapper[4745]: E0127 12:20:00.962609 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0e7e64e-4982-4725-99c0-e1240d177ea4" containerName="route-controller-manager" Jan 27 12:20:00 crc kubenswrapper[4745]: I0127 12:20:00.962621 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0e7e64e-4982-4725-99c0-e1240d177ea4" containerName="route-controller-manager" Jan 27 12:20:00 crc kubenswrapper[4745]: E0127 12:20:00.962634 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b96cf09a-c9fc-4831-ac5f-6f41fc66b348" containerName="controller-manager" Jan 27 12:20:00 crc kubenswrapper[4745]: I0127 12:20:00.962640 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="b96cf09a-c9fc-4831-ac5f-6f41fc66b348" containerName="controller-manager" Jan 27 12:20:00 crc kubenswrapper[4745]: I0127 12:20:00.962744 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0e7e64e-4982-4725-99c0-e1240d177ea4" containerName="route-controller-manager" Jan 27 12:20:00 crc kubenswrapper[4745]: I0127 12:20:00.962756 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="b96cf09a-c9fc-4831-ac5f-6f41fc66b348" containerName="controller-manager" Jan 27 12:20:00 crc kubenswrapper[4745]: I0127 12:20:00.963113 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5bfb445888-jg2p2" Jan 27 12:20:00 crc kubenswrapper[4745]: I0127 12:20:00.983330 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bfb445888-jg2p2"] Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.027484 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-proxy-ca-bundles\") pod \"b96cf09a-c9fc-4831-ac5f-6f41fc66b348\" (UID: \"b96cf09a-c9fc-4831-ac5f-6f41fc66b348\") " Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.027591 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mznrt\" (UniqueName: \"kubernetes.io/projected/42f905fa-4050-42a6-a9dc-e1280e639be9-kube-api-access-mznrt\") pod \"route-controller-manager-5bfb445888-jg2p2\" (UID: \"42f905fa-4050-42a6-a9dc-e1280e639be9\") " pod="openshift-route-controller-manager/route-controller-manager-5bfb445888-jg2p2" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.027617 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42f905fa-4050-42a6-a9dc-e1280e639be9-client-ca\") pod \"route-controller-manager-5bfb445888-jg2p2\" (UID: \"42f905fa-4050-42a6-a9dc-e1280e639be9\") " pod="openshift-route-controller-manager/route-controller-manager-5bfb445888-jg2p2" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.027642 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42f905fa-4050-42a6-a9dc-e1280e639be9-serving-cert\") pod \"route-controller-manager-5bfb445888-jg2p2\" (UID: \"42f905fa-4050-42a6-a9dc-e1280e639be9\") " pod="openshift-route-controller-manager/route-controller-manager-5bfb445888-jg2p2" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.027707 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42f905fa-4050-42a6-a9dc-e1280e639be9-config\") pod \"route-controller-manager-5bfb445888-jg2p2\" (UID: \"42f905fa-4050-42a6-a9dc-e1280e639be9\") " pod="openshift-route-controller-manager/route-controller-manager-5bfb445888-jg2p2" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.028643 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b96cf09a-c9fc-4831-ac5f-6f41fc66b348" (UID: "b96cf09a-c9fc-4831-ac5f-6f41fc66b348"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.131600 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-serving-cert\") pod \"b96cf09a-c9fc-4831-ac5f-6f41fc66b348\" (UID: \"b96cf09a-c9fc-4831-ac5f-6f41fc66b348\") " Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.131660 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0e7e64e-4982-4725-99c0-e1240d177ea4-client-ca\") pod \"e0e7e64e-4982-4725-99c0-e1240d177ea4\" (UID: \"e0e7e64e-4982-4725-99c0-e1240d177ea4\") " Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.131719 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg4rm\" (UniqueName: \"kubernetes.io/projected/e0e7e64e-4982-4725-99c0-e1240d177ea4-kube-api-access-tg4rm\") pod \"e0e7e64e-4982-4725-99c0-e1240d177ea4\" (UID: \"e0e7e64e-4982-4725-99c0-e1240d177ea4\") " Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.131750 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6lx9\" (UniqueName: \"kubernetes.io/projected/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-kube-api-access-d6lx9\") pod \"b96cf09a-c9fc-4831-ac5f-6f41fc66b348\" (UID: \"b96cf09a-c9fc-4831-ac5f-6f41fc66b348\") " Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.131777 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0e7e64e-4982-4725-99c0-e1240d177ea4-serving-cert\") pod \"e0e7e64e-4982-4725-99c0-e1240d177ea4\" (UID: \"e0e7e64e-4982-4725-99c0-e1240d177ea4\") " Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.131855 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-config\") pod \"b96cf09a-c9fc-4831-ac5f-6f41fc66b348\" (UID: \"b96cf09a-c9fc-4831-ac5f-6f41fc66b348\") " Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.131914 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-client-ca\") pod \"b96cf09a-c9fc-4831-ac5f-6f41fc66b348\" (UID: \"b96cf09a-c9fc-4831-ac5f-6f41fc66b348\") " Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.131936 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0e7e64e-4982-4725-99c0-e1240d177ea4-config\") pod \"e0e7e64e-4982-4725-99c0-e1240d177ea4\" (UID: \"e0e7e64e-4982-4725-99c0-e1240d177ea4\") " Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.132034 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mznrt\" (UniqueName: \"kubernetes.io/projected/42f905fa-4050-42a6-a9dc-e1280e639be9-kube-api-access-mznrt\") pod \"route-controller-manager-5bfb445888-jg2p2\" (UID: \"42f905fa-4050-42a6-a9dc-e1280e639be9\") " pod="openshift-route-controller-manager/route-controller-manager-5bfb445888-jg2p2" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.132060 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42f905fa-4050-42a6-a9dc-e1280e639be9-client-ca\") pod 
\"route-controller-manager-5bfb445888-jg2p2\" (UID: \"42f905fa-4050-42a6-a9dc-e1280e639be9\") " pod="openshift-route-controller-manager/route-controller-manager-5bfb445888-jg2p2" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.132094 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42f905fa-4050-42a6-a9dc-e1280e639be9-serving-cert\") pod \"route-controller-manager-5bfb445888-jg2p2\" (UID: \"42f905fa-4050-42a6-a9dc-e1280e639be9\") " pod="openshift-route-controller-manager/route-controller-manager-5bfb445888-jg2p2" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.132174 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42f905fa-4050-42a6-a9dc-e1280e639be9-config\") pod \"route-controller-manager-5bfb445888-jg2p2\" (UID: \"42f905fa-4050-42a6-a9dc-e1280e639be9\") " pod="openshift-route-controller-manager/route-controller-manager-5bfb445888-jg2p2" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.132242 4745 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.132708 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0e7e64e-4982-4725-99c0-e1240d177ea4-client-ca" (OuterVolumeSpecName: "client-ca") pod "e0e7e64e-4982-4725-99c0-e1240d177ea4" (UID: "e0e7e64e-4982-4725-99c0-e1240d177ea4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.133463 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0e7e64e-4982-4725-99c0-e1240d177ea4-config" (OuterVolumeSpecName: "config") pod "e0e7e64e-4982-4725-99c0-e1240d177ea4" (UID: "e0e7e64e-4982-4725-99c0-e1240d177ea4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.133685 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42f905fa-4050-42a6-a9dc-e1280e639be9-config\") pod \"route-controller-manager-5bfb445888-jg2p2\" (UID: \"42f905fa-4050-42a6-a9dc-e1280e639be9\") " pod="openshift-route-controller-manager/route-controller-manager-5bfb445888-jg2p2" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.134364 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42f905fa-4050-42a6-a9dc-e1280e639be9-client-ca\") pod \"route-controller-manager-5bfb445888-jg2p2\" (UID: \"42f905fa-4050-42a6-a9dc-e1280e639be9\") " pod="openshift-route-controller-manager/route-controller-manager-5bfb445888-jg2p2" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.134600 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-client-ca" (OuterVolumeSpecName: "client-ca") pod "b96cf09a-c9fc-4831-ac5f-6f41fc66b348" (UID: "b96cf09a-c9fc-4831-ac5f-6f41fc66b348"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.135130 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-config" (OuterVolumeSpecName: "config") pod "b96cf09a-c9fc-4831-ac5f-6f41fc66b348" (UID: "b96cf09a-c9fc-4831-ac5f-6f41fc66b348"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.138656 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42f905fa-4050-42a6-a9dc-e1280e639be9-serving-cert\") pod \"route-controller-manager-5bfb445888-jg2p2\" (UID: \"42f905fa-4050-42a6-a9dc-e1280e639be9\") " pod="openshift-route-controller-manager/route-controller-manager-5bfb445888-jg2p2" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.139387 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0e7e64e-4982-4725-99c0-e1240d177ea4-kube-api-access-tg4rm" (OuterVolumeSpecName: "kube-api-access-tg4rm") pod "e0e7e64e-4982-4725-99c0-e1240d177ea4" (UID: "e0e7e64e-4982-4725-99c0-e1240d177ea4"). InnerVolumeSpecName "kube-api-access-tg4rm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.139750 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-kube-api-access-d6lx9" (OuterVolumeSpecName: "kube-api-access-d6lx9") pod "b96cf09a-c9fc-4831-ac5f-6f41fc66b348" (UID: "b96cf09a-c9fc-4831-ac5f-6f41fc66b348"). InnerVolumeSpecName "kube-api-access-d6lx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.140369 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0e7e64e-4982-4725-99c0-e1240d177ea4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e0e7e64e-4982-4725-99c0-e1240d177ea4" (UID: "e0e7e64e-4982-4725-99c0-e1240d177ea4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.140915 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b96cf09a-c9fc-4831-ac5f-6f41fc66b348" (UID: "b96cf09a-c9fc-4831-ac5f-6f41fc66b348"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.162787 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mznrt\" (UniqueName: \"kubernetes.io/projected/42f905fa-4050-42a6-a9dc-e1280e639be9-kube-api-access-mznrt\") pod \"route-controller-manager-5bfb445888-jg2p2\" (UID: \"42f905fa-4050-42a6-a9dc-e1280e639be9\") " pod="openshift-route-controller-manager/route-controller-manager-5bfb445888-jg2p2" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.169568 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk" event={"ID":"e0e7e64e-4982-4725-99c0-e1240d177ea4","Type":"ContainerDied","Data":"d774fc8469b0728cb64c2693438540573bb0d9344eef3007affd59c7ef124d6d"} Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.169614 4745 scope.go:117] "RemoveContainer" containerID="938395063dab5a3d086d04cc6e5c5ac21b8d77a406f16d79234e7ba18ea78440" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.169723 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.191550 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-756c9cd684-rr9l6" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.191545 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-756c9cd684-rr9l6" event={"ID":"b96cf09a-c9fc-4831-ac5f-6f41fc66b348","Type":"ContainerDied","Data":"74b30ec62584c6cb87096eb60625551bc78ed234878c99dac811e34b0fcb375b"} Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.208734 4745 scope.go:117] "RemoveContainer" containerID="8c946c80ddadfa1c9cad62e31f4e1f8669e91ab8509d759d47b7f4e2c87aec49" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.222115 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-756c9cd684-rr9l6"] Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.232761 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-756c9cd684-rr9l6"] Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.233890 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg4rm\" (UniqueName: \"kubernetes.io/projected/e0e7e64e-4982-4725-99c0-e1240d177ea4-kube-api-access-tg4rm\") on node \"crc\" DevicePath \"\"" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.233927 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6lx9\" (UniqueName: \"kubernetes.io/projected/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-kube-api-access-d6lx9\") on node \"crc\" DevicePath \"\"" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.233978 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0e7e64e-4982-4725-99c0-e1240d177ea4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.233989 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.233998 4745 reconciler_common.go:293] "Volume 
detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.234008 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0e7e64e-4982-4725-99c0-e1240d177ea4-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.234017 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b96cf09a-c9fc-4831-ac5f-6f41fc66b348-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.234034 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0e7e64e-4982-4725-99c0-e1240d177ea4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.237059 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk"] Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.240640 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c9cdf859-lpcsk"] Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.295464 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5bfb445888-jg2p2" Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.689088 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bfb445888-jg2p2"] Jan 27 12:20:01 crc kubenswrapper[4745]: W0127 12:20:01.709626 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42f905fa_4050_42a6_a9dc_e1280e639be9.slice/crio-a85ea42f4bc92818317ffa5d40318e0163aee558b55802a717c6f98bc00d14a6 WatchSource:0}: Error finding container a85ea42f4bc92818317ffa5d40318e0163aee558b55802a717c6f98bc00d14a6: Status 404 returned error can't find the container with id a85ea42f4bc92818317ffa5d40318e0163aee558b55802a717c6f98bc00d14a6 Jan 27 12:20:01 crc kubenswrapper[4745]: I0127 12:20:01.719041 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" podUID="8481d31f-f701-4821-9893-5ebf45d2dcb8" containerName="oauth-openshift" containerID="cri-o://81131ad077bdba0a5434f9f7d53e0f53c7a4a0eadc71c1f4aeb1f95e403b187c" gracePeriod=15 Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.080706 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b96cf09a-c9fc-4831-ac5f-6f41fc66b348" path="/var/lib/kubelet/pods/b96cf09a-c9fc-4831-ac5f-6f41fc66b348/volumes" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.081949 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0e7e64e-4982-4725-99c0-e1240d177ea4" path="/var/lib/kubelet/pods/e0e7e64e-4982-4725-99c0-e1240d177ea4/volumes" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.126669 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.199995 4745 generic.go:334] "Generic (PLEG): container finished" podID="8481d31f-f701-4821-9893-5ebf45d2dcb8" containerID="81131ad077bdba0a5434f9f7d53e0f53c7a4a0eadc71c1f4aeb1f95e403b187c" exitCode=0 Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.200042 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" event={"ID":"8481d31f-f701-4821-9893-5ebf45d2dcb8","Type":"ContainerDied","Data":"81131ad077bdba0a5434f9f7d53e0f53c7a4a0eadc71c1f4aeb1f95e403b187c"} Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.200086 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.200110 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7rjtn" event={"ID":"8481d31f-f701-4821-9893-5ebf45d2dcb8","Type":"ContainerDied","Data":"9a9d608edecbd2447e88cb41653f8576f10819ac176b3420633935c74a10f58c"} Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.200135 4745 scope.go:117] "RemoveContainer" containerID="81131ad077bdba0a5434f9f7d53e0f53c7a4a0eadc71c1f4aeb1f95e403b187c" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.212062 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5bfb445888-jg2p2" event={"ID":"42f905fa-4050-42a6-a9dc-e1280e639be9","Type":"ContainerStarted","Data":"a04d1f887055021f4990da7e5cc5bb416ac423f37c3ef3d8174c872c20ebf155"} Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.212094 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5bfb445888-jg2p2" event={"ID":"42f905fa-4050-42a6-a9dc-e1280e639be9","Type":"ContainerStarted","Data":"a85ea42f4bc92818317ffa5d40318e0163aee558b55802a717c6f98bc00d14a6"} Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.213147 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5bfb445888-jg2p2" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.241321 4745 scope.go:117] "RemoveContainer" containerID="81131ad077bdba0a5434f9f7d53e0f53c7a4a0eadc71c1f4aeb1f95e403b187c" Jan 27 12:20:02 crc kubenswrapper[4745]: E0127 12:20:02.243484 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81131ad077bdba0a5434f9f7d53e0f53c7a4a0eadc71c1f4aeb1f95e403b187c\": container with ID starting with 81131ad077bdba0a5434f9f7d53e0f53c7a4a0eadc71c1f4aeb1f95e403b187c not found: ID does not exist" containerID="81131ad077bdba0a5434f9f7d53e0f53c7a4a0eadc71c1f4aeb1f95e403b187c" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.243513 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81131ad077bdba0a5434f9f7d53e0f53c7a4a0eadc71c1f4aeb1f95e403b187c"} err="failed to get container status \"81131ad077bdba0a5434f9f7d53e0f53c7a4a0eadc71c1f4aeb1f95e403b187c\": rpc error: code = NotFound desc = could not find container \"81131ad077bdba0a5434f9f7d53e0f53c7a4a0eadc71c1f4aeb1f95e403b187c\": container with ID starting with 81131ad077bdba0a5434f9f7d53e0f53c7a4a0eadc71c1f4aeb1f95e403b187c not found: ID does not exist" Jan 
27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.247361 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-675gh\" (UniqueName: \"kubernetes.io/projected/8481d31f-f701-4821-9893-5ebf45d2dcb8-kube-api-access-675gh\") pod \"8481d31f-f701-4821-9893-5ebf45d2dcb8\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.247431 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-service-ca\") pod \"8481d31f-f701-4821-9893-5ebf45d2dcb8\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.247468 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-serving-cert\") pod \"8481d31f-f701-4821-9893-5ebf45d2dcb8\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.247495 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-trusted-ca-bundle\") pod \"8481d31f-f701-4821-9893-5ebf45d2dcb8\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.247564 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-user-template-error\") pod \"8481d31f-f701-4821-9893-5ebf45d2dcb8\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.247582 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-cliconfig\") pod \"8481d31f-f701-4821-9893-5ebf45d2dcb8\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.247608 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8481d31f-f701-4821-9893-5ebf45d2dcb8-audit-dir\") pod \"8481d31f-f701-4821-9893-5ebf45d2dcb8\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.247645 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-user-template-provider-selection\") pod \"8481d31f-f701-4821-9893-5ebf45d2dcb8\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.247714 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-session\") pod \"8481d31f-f701-4821-9893-5ebf45d2dcb8\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.247755 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-ocp-branding-template\") pod \"8481d31f-f701-4821-9893-5ebf45d2dcb8\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.247776 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-router-certs\") pod \"8481d31f-f701-4821-9893-5ebf45d2dcb8\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.247797 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8481d31f-f701-4821-9893-5ebf45d2dcb8-audit-policies\") pod \"8481d31f-f701-4821-9893-5ebf45d2dcb8\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.247859 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-user-idp-0-file-data\") pod \"8481d31f-f701-4821-9893-5ebf45d2dcb8\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.247887 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-user-template-login\") pod \"8481d31f-f701-4821-9893-5ebf45d2dcb8\" (UID: \"8481d31f-f701-4821-9893-5ebf45d2dcb8\") " Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.250782 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "8481d31f-f701-4821-9893-5ebf45d2dcb8" (UID: "8481d31f-f701-4821-9893-5ebf45d2dcb8"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.251176 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "8481d31f-f701-4821-9893-5ebf45d2dcb8" (UID: "8481d31f-f701-4821-9893-5ebf45d2dcb8"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.253870 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8481d31f-f701-4821-9893-5ebf45d2dcb8-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "8481d31f-f701-4821-9893-5ebf45d2dcb8" (UID: "8481d31f-f701-4821-9893-5ebf45d2dcb8"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.254203 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "8481d31f-f701-4821-9893-5ebf45d2dcb8" (UID: "8481d31f-f701-4821-9893-5ebf45d2dcb8"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.255157 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8481d31f-f701-4821-9893-5ebf45d2dcb8-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "8481d31f-f701-4821-9893-5ebf45d2dcb8" (UID: "8481d31f-f701-4821-9893-5ebf45d2dcb8"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.263074 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8481d31f-f701-4821-9893-5ebf45d2dcb8-kube-api-access-675gh" (OuterVolumeSpecName: "kube-api-access-675gh") pod "8481d31f-f701-4821-9893-5ebf45d2dcb8" (UID: "8481d31f-f701-4821-9893-5ebf45d2dcb8"). InnerVolumeSpecName "kube-api-access-675gh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.263163 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "8481d31f-f701-4821-9893-5ebf45d2dcb8" (UID: "8481d31f-f701-4821-9893-5ebf45d2dcb8"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.265843 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "8481d31f-f701-4821-9893-5ebf45d2dcb8" (UID: "8481d31f-f701-4821-9893-5ebf45d2dcb8"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.266289 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "8481d31f-f701-4821-9893-5ebf45d2dcb8" (UID: "8481d31f-f701-4821-9893-5ebf45d2dcb8"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.267092 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "8481d31f-f701-4821-9893-5ebf45d2dcb8" (UID: "8481d31f-f701-4821-9893-5ebf45d2dcb8"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.268104 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "8481d31f-f701-4821-9893-5ebf45d2dcb8" (UID: "8481d31f-f701-4821-9893-5ebf45d2dcb8"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.270297 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "8481d31f-f701-4821-9893-5ebf45d2dcb8" (UID: "8481d31f-f701-4821-9893-5ebf45d2dcb8"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.274133 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "8481d31f-f701-4821-9893-5ebf45d2dcb8" (UID: "8481d31f-f701-4821-9893-5ebf45d2dcb8"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.280497 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "8481d31f-f701-4821-9893-5ebf45d2dcb8" (UID: "8481d31f-f701-4821-9893-5ebf45d2dcb8"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.349301 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.349456 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.349501 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-675gh\" (UniqueName: \"kubernetes.io/projected/8481d31f-f701-4821-9893-5ebf45d2dcb8-kube-api-access-675gh\") on node \"crc\" DevicePath \"\"" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.349514 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.349531 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.349544 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.349556 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.349567 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.349580 4745 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8481d31f-f701-4821-9893-5ebf45d2dcb8-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.349593 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.349605 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.349617 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.349628 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8481d31f-f701-4821-9893-5ebf45d2dcb8-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.349639 4745 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8481d31f-f701-4821-9893-5ebf45d2dcb8-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.410254 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5bfb445888-jg2p2" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.432855 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5bfb445888-jg2p2" podStartSLOduration=6.432837827 podStartE2EDuration="6.432837827s" podCreationTimestamp="2026-01-27 12:19:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:20:02.236478736 +0000 UTC m=+495.041389444" watchObservedRunningTime="2026-01-27 12:20:02.432837827 +0000 UTC m=+495.237748515" Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.528478 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7rjtn"] Jan 27 12:20:02 crc kubenswrapper[4745]: I0127 12:20:02.531487 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7rjtn"] Jan 27 12:20:03 crc kubenswrapper[4745]: I0127 12:20:03.933932 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b5ccb76d9-bhzq2"] Jan 27 12:20:03 crc kubenswrapper[4745]: E0127 12:20:03.934426 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8481d31f-f701-4821-9893-5ebf45d2dcb8" containerName="oauth-openshift" Jan 27 12:20:03 crc kubenswrapper[4745]: I0127 12:20:03.934442 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="8481d31f-f701-4821-9893-5ebf45d2dcb8" containerName="oauth-openshift" Jan 27 12:20:03 crc kubenswrapper[4745]: I0127 12:20:03.934564 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="8481d31f-f701-4821-9893-5ebf45d2dcb8" containerName="oauth-openshift" Jan 27 12:20:03 crc kubenswrapper[4745]: I0127 12:20:03.934951 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b5ccb76d9-bhzq2" Jan 27 12:20:03 crc kubenswrapper[4745]: I0127 12:20:03.937194 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 12:20:03 crc kubenswrapper[4745]: I0127 12:20:03.937450 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 12:20:03 crc kubenswrapper[4745]: I0127 12:20:03.937545 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 12:20:03 crc kubenswrapper[4745]: I0127 12:20:03.937732 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 12:20:03 crc kubenswrapper[4745]: I0127 12:20:03.937794 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 12:20:03 crc kubenswrapper[4745]: I0127 12:20:03.938542 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 12:20:03 crc kubenswrapper[4745]: I0127 12:20:03.945188 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 12:20:03 crc kubenswrapper[4745]: I0127 12:20:03.945477 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b5ccb76d9-bhzq2"] Jan 27 12:20:04 crc kubenswrapper[4745]: I0127 12:20:04.069167 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a85aa657-c018-41b0-babd-dd8f47616d8e-client-ca\") pod \"controller-manager-7b5ccb76d9-bhzq2\" (UID: \"a85aa657-c018-41b0-babd-dd8f47616d8e\") " pod="openshift-controller-manager/controller-manager-7b5ccb76d9-bhzq2" Jan 27 12:20:04 crc kubenswrapper[4745]: I0127 12:20:04.069512 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a85aa657-c018-41b0-babd-dd8f47616d8e-config\") pod \"controller-manager-7b5ccb76d9-bhzq2\" (UID: \"a85aa657-c018-41b0-babd-dd8f47616d8e\") " pod="openshift-controller-manager/controller-manager-7b5ccb76d9-bhzq2" Jan 27 12:20:04 crc kubenswrapper[4745]: I0127 12:20:04.069545 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a85aa657-c018-41b0-babd-dd8f47616d8e-proxy-ca-bundles\") pod \"controller-manager-7b5ccb76d9-bhzq2\" (UID: \"a85aa657-c018-41b0-babd-dd8f47616d8e\") " pod="openshift-controller-manager/controller-manager-7b5ccb76d9-bhzq2" Jan 27 12:20:04 crc kubenswrapper[4745]: I0127 12:20:04.069569 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a85aa657-c018-41b0-babd-dd8f47616d8e-serving-cert\") pod \"controller-manager-7b5ccb76d9-bhzq2\" (UID: \"a85aa657-c018-41b0-babd-dd8f47616d8e\") " pod="openshift-controller-manager/controller-manager-7b5ccb76d9-bhzq2" Jan 27 12:20:04 crc kubenswrapper[4745]: I0127 12:20:04.069596 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgm5w\" (UniqueName: 
\"kubernetes.io/projected/a85aa657-c018-41b0-babd-dd8f47616d8e-kube-api-access-zgm5w\") pod \"controller-manager-7b5ccb76d9-bhzq2\" (UID: \"a85aa657-c018-41b0-babd-dd8f47616d8e\") " pod="openshift-controller-manager/controller-manager-7b5ccb76d9-bhzq2" Jan 27 12:20:04 crc kubenswrapper[4745]: I0127 12:20:04.080082 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8481d31f-f701-4821-9893-5ebf45d2dcb8" path="/var/lib/kubelet/pods/8481d31f-f701-4821-9893-5ebf45d2dcb8/volumes" Jan 27 12:20:04 crc kubenswrapper[4745]: I0127 12:20:04.170201 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a85aa657-c018-41b0-babd-dd8f47616d8e-client-ca\") pod \"controller-manager-7b5ccb76d9-bhzq2\" (UID: \"a85aa657-c018-41b0-babd-dd8f47616d8e\") " pod="openshift-controller-manager/controller-manager-7b5ccb76d9-bhzq2" Jan 27 12:20:04 crc kubenswrapper[4745]: I0127 12:20:04.170439 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a85aa657-c018-41b0-babd-dd8f47616d8e-config\") pod \"controller-manager-7b5ccb76d9-bhzq2\" (UID: \"a85aa657-c018-41b0-babd-dd8f47616d8e\") " pod="openshift-controller-manager/controller-manager-7b5ccb76d9-bhzq2" Jan 27 12:20:04 crc kubenswrapper[4745]: I0127 12:20:04.170493 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a85aa657-c018-41b0-babd-dd8f47616d8e-proxy-ca-bundles\") pod \"controller-manager-7b5ccb76d9-bhzq2\" (UID: \"a85aa657-c018-41b0-babd-dd8f47616d8e\") " pod="openshift-controller-manager/controller-manager-7b5ccb76d9-bhzq2" Jan 27 12:20:04 crc kubenswrapper[4745]: I0127 12:20:04.170513 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a85aa657-c018-41b0-babd-dd8f47616d8e-serving-cert\") pod \"controller-manager-7b5ccb76d9-bhzq2\" (UID: \"a85aa657-c018-41b0-babd-dd8f47616d8e\") " pod="openshift-controller-manager/controller-manager-7b5ccb76d9-bhzq2" Jan 27 12:20:04 crc kubenswrapper[4745]: I0127 12:20:04.170533 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgm5w\" (UniqueName: \"kubernetes.io/projected/a85aa657-c018-41b0-babd-dd8f47616d8e-kube-api-access-zgm5w\") pod \"controller-manager-7b5ccb76d9-bhzq2\" (UID: \"a85aa657-c018-41b0-babd-dd8f47616d8e\") " pod="openshift-controller-manager/controller-manager-7b5ccb76d9-bhzq2" Jan 27 12:20:04 crc kubenswrapper[4745]: I0127 12:20:04.170969 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a85aa657-c018-41b0-babd-dd8f47616d8e-client-ca\") pod \"controller-manager-7b5ccb76d9-bhzq2\" (UID: \"a85aa657-c018-41b0-babd-dd8f47616d8e\") " pod="openshift-controller-manager/controller-manager-7b5ccb76d9-bhzq2" Jan 27 12:20:04 crc kubenswrapper[4745]: I0127 12:20:04.172462 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a85aa657-c018-41b0-babd-dd8f47616d8e-proxy-ca-bundles\") pod \"controller-manager-7b5ccb76d9-bhzq2\" (UID: \"a85aa657-c018-41b0-babd-dd8f47616d8e\") " pod="openshift-controller-manager/controller-manager-7b5ccb76d9-bhzq2" Jan 27 12:20:04 crc kubenswrapper[4745]: I0127 12:20:04.172572 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/a85aa657-c018-41b0-babd-dd8f47616d8e-config\") pod \"controller-manager-7b5ccb76d9-bhzq2\" (UID: \"a85aa657-c018-41b0-babd-dd8f47616d8e\") " pod="openshift-controller-manager/controller-manager-7b5ccb76d9-bhzq2" Jan 27 12:20:04 crc kubenswrapper[4745]: I0127 12:20:04.177099 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a85aa657-c018-41b0-babd-dd8f47616d8e-serving-cert\") pod \"controller-manager-7b5ccb76d9-bhzq2\" (UID: \"a85aa657-c018-41b0-babd-dd8f47616d8e\") " pod="openshift-controller-manager/controller-manager-7b5ccb76d9-bhzq2" Jan 27 12:20:04 crc kubenswrapper[4745]: I0127 12:20:04.187640 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgm5w\" (UniqueName: \"kubernetes.io/projected/a85aa657-c018-41b0-babd-dd8f47616d8e-kube-api-access-zgm5w\") pod \"controller-manager-7b5ccb76d9-bhzq2\" (UID: \"a85aa657-c018-41b0-babd-dd8f47616d8e\") " pod="openshift-controller-manager/controller-manager-7b5ccb76d9-bhzq2" Jan 27 12:20:04 crc kubenswrapper[4745]: I0127 12:20:04.250714 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b5ccb76d9-bhzq2" Jan 27 12:20:04 crc kubenswrapper[4745]: I0127 12:20:04.717856 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b5ccb76d9-bhzq2"] Jan 27 12:20:04 crc kubenswrapper[4745]: W0127 12:20:04.721698 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda85aa657_c018_41b0_babd_dd8f47616d8e.slice/crio-040a3056b916d4e1e3ba3ca56c3c410c4647f02075820d80effd2669ce59d5fb WatchSource:0}: Error finding container 040a3056b916d4e1e3ba3ca56c3c410c4647f02075820d80effd2669ce59d5fb: Status 404 returned error can't find the container with id 040a3056b916d4e1e3ba3ca56c3c410c4647f02075820d80effd2669ce59d5fb Jan 27 12:20:05 crc kubenswrapper[4745]: I0127 12:20:05.228501 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b5ccb76d9-bhzq2" event={"ID":"a85aa657-c018-41b0-babd-dd8f47616d8e","Type":"ContainerStarted","Data":"f3c765c760240579972129d8eac4ad45c60f36f34699cd3f0a992a60eb50a195"} Jan 27 12:20:05 crc kubenswrapper[4745]: I0127 12:20:05.230106 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b5ccb76d9-bhzq2" event={"ID":"a85aa657-c018-41b0-babd-dd8f47616d8e","Type":"ContainerStarted","Data":"040a3056b916d4e1e3ba3ca56c3c410c4647f02075820d80effd2669ce59d5fb"} Jan 27 12:20:05 crc kubenswrapper[4745]: I0127 12:20:05.230204 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7b5ccb76d9-bhzq2" Jan 27 12:20:05 crc kubenswrapper[4745]: I0127 12:20:05.234929 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b5ccb76d9-bhzq2" Jan 27 12:20:05 crc kubenswrapper[4745]: I0127 12:20:05.251050 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7b5ccb76d9-bhzq2" podStartSLOduration=9.251033082 podStartE2EDuration="9.251033082s" podCreationTimestamp="2026-01-27 12:19:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:20:05.248699234 +0000 UTC m=+498.053609922" watchObservedRunningTime="2026-01-27 12:20:05.251033082 +0000 UTC m=+498.055943770" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.944727 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p"] Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.945894 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.948686 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.949425 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.949680 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.950893 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.951456 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.951706 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.951696 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.952267 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.952390 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.952400 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.952890 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.953226 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.960550 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.960611 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.960649 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-system-session\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.960685 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-system-service-ca\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.960713 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/708f4b4f-075c-4524-98f3-6cd7d9a12abf-audit-policies\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.960741 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-system-router-certs\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.960764 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/708f4b4f-075c-4524-98f3-6cd7d9a12abf-audit-dir\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.960791 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-user-template-error\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.960844 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.960888 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrfjz\" (UniqueName: \"kubernetes.io/projected/708f4b4f-075c-4524-98f3-6cd7d9a12abf-kube-api-access-xrfjz\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.960914 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.960951 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.960978 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-user-template-login\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.961004 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.967575 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.967745 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.977102 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 12:20:09 crc kubenswrapper[4745]: I0127 12:20:09.977364 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p"] Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.062162 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-system-service-ca\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.062226 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/708f4b4f-075c-4524-98f3-6cd7d9a12abf-audit-policies\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.062267 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-system-router-certs\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.062291 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/708f4b4f-075c-4524-98f3-6cd7d9a12abf-audit-dir\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.062315 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-user-template-error\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.062341 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.062367 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrfjz\" (UniqueName: \"kubernetes.io/projected/708f4b4f-075c-4524-98f3-6cd7d9a12abf-kube-api-access-xrfjz\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.062383 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.062410 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.062453 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-user-template-login\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.062471 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.062492 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.062507 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.062529 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-system-session\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.064067 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/708f4b4f-075c-4524-98f3-6cd7d9a12abf-audit-dir\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.064328 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-system-service-ca\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.064626 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/708f4b4f-075c-4524-98f3-6cd7d9a12abf-audit-policies\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.065582 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.068686 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.071259 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-system-router-certs\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.071287 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-system-session\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.071284 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.071461 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.071947 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.073597 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.074171 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-user-template-login\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.080027 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/708f4b4f-075c-4524-98f3-6cd7d9a12abf-v4-0-config-user-template-error\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.083301 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrfjz\" (UniqueName: \"kubernetes.io/projected/708f4b4f-075c-4524-98f3-6cd7d9a12abf-kube-api-access-xrfjz\") pod \"oauth-openshift-5dd6bb8bb7-sqw7p\" (UID: \"708f4b4f-075c-4524-98f3-6cd7d9a12abf\") " pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.273545 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:10 crc kubenswrapper[4745]: I0127 12:20:10.757854 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p"] Jan 27 12:20:11 crc kubenswrapper[4745]: I0127 12:20:11.265531 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" event={"ID":"708f4b4f-075c-4524-98f3-6cd7d9a12abf","Type":"ContainerStarted","Data":"943e174028266074d467b4986153c4bb5ff03ce17d9a1ef27bca36f0cda13696"} Jan 27 12:20:11 crc kubenswrapper[4745]: I0127 12:20:11.265597 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" event={"ID":"708f4b4f-075c-4524-98f3-6cd7d9a12abf","Type":"ContainerStarted","Data":"663ca24115522393af74d2f7a2d4b9f560c51df9e2fca68cfffe7092fbb868e9"} Jan 27 12:20:11 crc kubenswrapper[4745]: I0127 12:20:11.266153 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:11 crc kubenswrapper[4745]: I0127 12:20:11.301733 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" podStartSLOduration=35.30166975 podStartE2EDuration="35.30166975s" podCreationTimestamp="2026-01-27 12:19:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:20:11.288324232 +0000 UTC m=+504.093234930" watchObservedRunningTime="2026-01-27 12:20:11.30166975 +0000 UTC m=+504.106580468" Jan 27 12:20:11 crc kubenswrapper[4745]: I0127 12:20:11.508373 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5dd6bb8bb7-sqw7p" Jan 27 12:20:55 crc kubenswrapper[4745]: I0127 12:20:55.476186 4745 scope.go:117] "RemoveContainer" containerID="adefb782983e85e4bc44cb847f322f8c1acb6f2e092dedc3da31c9650ea26193" Jan 27 12:20:55 crc kubenswrapper[4745]: I0127 12:20:55.492692 4745 scope.go:117] "RemoveContainer" containerID="7da42ada67eb4adfeffd31c903ae1d8cf259f32f62fb3a7ff17fcd50e65d9357" Jan 27 12:22:05 crc kubenswrapper[4745]: I0127 
12:22:05.967451 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 12:22:05 crc kubenswrapper[4745]: I0127 12:22:05.968410 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 12:22:35 crc kubenswrapper[4745]: I0127 12:22:35.967854 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 12:22:35 crc kubenswrapper[4745]: I0127 12:22:35.968393 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 12:22:55 crc kubenswrapper[4745]: I0127 12:22:55.560957 4745 scope.go:117] "RemoveContainer" containerID="2bc895b11b266e2683da00535afeea4f601fe625bea42c0fb7443934f74f12ab" Jan 27 12:23:05 crc kubenswrapper[4745]: I0127 12:23:05.967386 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 12:23:05 crc kubenswrapper[4745]: I0127 12:23:05.967759 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 12:23:05 crc kubenswrapper[4745]: I0127 12:23:05.967969 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" Jan 27 12:23:05 crc kubenswrapper[4745]: I0127 12:23:05.968722 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d165911e4b0e3c594550d27c4ba050dc9d70726bdb582f151b9ce410a17826f"} pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 12:23:05 crc kubenswrapper[4745]: I0127 12:23:05.968781 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" containerID="cri-o://1d165911e4b0e3c594550d27c4ba050dc9d70726bdb582f151b9ce410a17826f" gracePeriod=600 Jan 27 12:23:06 crc kubenswrapper[4745]: I0127 12:23:06.343447 4745 generic.go:334] "Generic (PLEG): container finished" 
podID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerID="1d165911e4b0e3c594550d27c4ba050dc9d70726bdb582f151b9ce410a17826f" exitCode=0 Jan 27 12:23:06 crc kubenswrapper[4745]: I0127 12:23:06.343731 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerDied","Data":"1d165911e4b0e3c594550d27c4ba050dc9d70726bdb582f151b9ce410a17826f"} Jan 27 12:23:06 crc kubenswrapper[4745]: I0127 12:23:06.343756 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerStarted","Data":"0a05fb56de3f4f4964f4b329a07b5860a6f3e32e5425eaf7e81fdbe26e1e74c6"} Jan 27 12:23:06 crc kubenswrapper[4745]: I0127 12:23:06.343772 4745 scope.go:117] "RemoveContainer" containerID="99d92afaff4d5da46033fc226ce0aba0f0ba990de6f690349b869b38b7d1aea9" Jan 27 12:23:46 crc kubenswrapper[4745]: I0127 12:23:46.216759 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gqlqf"] Jan 27 12:23:46 crc kubenswrapper[4745]: I0127 12:23:46.218687 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" Jan 27 12:23:46 crc kubenswrapper[4745]: I0127 12:23:46.244421 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gqlqf"] Jan 27 12:23:46 crc kubenswrapper[4745]: I0127 12:23:46.345040 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gqlqf\" (UID: \"02627500-8d53-4290-b4c6-06d641348bb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" Jan 27 12:23:46 crc kubenswrapper[4745]: I0127 12:23:46.345096 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/02627500-8d53-4290-b4c6-06d641348bb5-registry-tls\") pod \"image-registry-66df7c8f76-gqlqf\" (UID: \"02627500-8d53-4290-b4c6-06d641348bb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" Jan 27 12:23:46 crc kubenswrapper[4745]: I0127 12:23:46.345131 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/02627500-8d53-4290-b4c6-06d641348bb5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gqlqf\" (UID: \"02627500-8d53-4290-b4c6-06d641348bb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" Jan 27 12:23:46 crc kubenswrapper[4745]: I0127 12:23:46.345150 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/02627500-8d53-4290-b4c6-06d641348bb5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gqlqf\" (UID: \"02627500-8d53-4290-b4c6-06d641348bb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" Jan 27 12:23:46 crc kubenswrapper[4745]: I0127 12:23:46.345185 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/02627500-8d53-4290-b4c6-06d641348bb5-trusted-ca\") pod \"image-registry-66df7c8f76-gqlqf\" (UID: \"02627500-8d53-4290-b4c6-06d641348bb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" Jan 27 12:23:46 crc kubenswrapper[4745]: I0127 12:23:46.345233 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxbvk\" (UniqueName: \"kubernetes.io/projected/02627500-8d53-4290-b4c6-06d641348bb5-kube-api-access-jxbvk\") pod \"image-registry-66df7c8f76-gqlqf\" (UID: \"02627500-8d53-4290-b4c6-06d641348bb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" Jan 27 12:23:46 crc kubenswrapper[4745]: I0127 12:23:46.345255 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/02627500-8d53-4290-b4c6-06d641348bb5-bound-sa-token\") pod \"image-registry-66df7c8f76-gqlqf\" (UID: \"02627500-8d53-4290-b4c6-06d641348bb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" Jan 27 12:23:46 crc kubenswrapper[4745]: I0127 12:23:46.345278 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/02627500-8d53-4290-b4c6-06d641348bb5-registry-certificates\") pod \"image-registry-66df7c8f76-gqlqf\" (UID: \"02627500-8d53-4290-b4c6-06d641348bb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" Jan 27 12:23:46 crc kubenswrapper[4745]: I0127 12:23:46.364751 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gqlqf\" (UID: \"02627500-8d53-4290-b4c6-06d641348bb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" Jan 27 12:23:46 crc kubenswrapper[4745]: I0127 12:23:46.446790 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/02627500-8d53-4290-b4c6-06d641348bb5-registry-tls\") pod \"image-registry-66df7c8f76-gqlqf\" (UID: \"02627500-8d53-4290-b4c6-06d641348bb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" Jan 27 12:23:46 crc kubenswrapper[4745]: I0127 12:23:46.447098 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/02627500-8d53-4290-b4c6-06d641348bb5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gqlqf\" (UID: \"02627500-8d53-4290-b4c6-06d641348bb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" Jan 27 12:23:46 crc kubenswrapper[4745]: I0127 12:23:46.447116 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/02627500-8d53-4290-b4c6-06d641348bb5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gqlqf\" (UID: \"02627500-8d53-4290-b4c6-06d641348bb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" Jan 27 12:23:46 crc kubenswrapper[4745]: I0127 12:23:46.447147 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/02627500-8d53-4290-b4c6-06d641348bb5-trusted-ca\") pod \"image-registry-66df7c8f76-gqlqf\" (UID: 
\"02627500-8d53-4290-b4c6-06d641348bb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" Jan 27 12:23:46 crc kubenswrapper[4745]: I0127 12:23:46.447180 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxbvk\" (UniqueName: \"kubernetes.io/projected/02627500-8d53-4290-b4c6-06d641348bb5-kube-api-access-jxbvk\") pod \"image-registry-66df7c8f76-gqlqf\" (UID: \"02627500-8d53-4290-b4c6-06d641348bb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" Jan 27 12:23:46 crc kubenswrapper[4745]: I0127 12:23:46.447197 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/02627500-8d53-4290-b4c6-06d641348bb5-bound-sa-token\") pod \"image-registry-66df7c8f76-gqlqf\" (UID: \"02627500-8d53-4290-b4c6-06d641348bb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" Jan 27 12:23:46 crc kubenswrapper[4745]: I0127 12:23:46.447223 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/02627500-8d53-4290-b4c6-06d641348bb5-registry-certificates\") pod \"image-registry-66df7c8f76-gqlqf\" (UID: \"02627500-8d53-4290-b4c6-06d641348bb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" Jan 27 12:23:46 crc kubenswrapper[4745]: I0127 12:23:46.447689 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/02627500-8d53-4290-b4c6-06d641348bb5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gqlqf\" (UID: \"02627500-8d53-4290-b4c6-06d641348bb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" Jan 27 12:23:46 crc kubenswrapper[4745]: I0127 12:23:46.448596 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/02627500-8d53-4290-b4c6-06d641348bb5-registry-certificates\") pod \"image-registry-66df7c8f76-gqlqf\" (UID: \"02627500-8d53-4290-b4c6-06d641348bb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" Jan 27 12:23:46 crc kubenswrapper[4745]: I0127 12:23:46.448597 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/02627500-8d53-4290-b4c6-06d641348bb5-trusted-ca\") pod \"image-registry-66df7c8f76-gqlqf\" (UID: \"02627500-8d53-4290-b4c6-06d641348bb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" Jan 27 12:23:46 crc kubenswrapper[4745]: I0127 12:23:46.452185 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/02627500-8d53-4290-b4c6-06d641348bb5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gqlqf\" (UID: \"02627500-8d53-4290-b4c6-06d641348bb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" Jan 27 12:23:46 crc kubenswrapper[4745]: I0127 12:23:46.454659 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/02627500-8d53-4290-b4c6-06d641348bb5-registry-tls\") pod \"image-registry-66df7c8f76-gqlqf\" (UID: \"02627500-8d53-4290-b4c6-06d641348bb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" Jan 27 12:23:46 crc kubenswrapper[4745]: I0127 12:23:46.462457 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jxbvk\" (UniqueName: \"kubernetes.io/projected/02627500-8d53-4290-b4c6-06d641348bb5-kube-api-access-jxbvk\") pod \"image-registry-66df7c8f76-gqlqf\" (UID: \"02627500-8d53-4290-b4c6-06d641348bb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" Jan 27 12:23:46 crc kubenswrapper[4745]: I0127 12:23:46.465274 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/02627500-8d53-4290-b4c6-06d641348bb5-bound-sa-token\") pod \"image-registry-66df7c8f76-gqlqf\" (UID: \"02627500-8d53-4290-b4c6-06d641348bb5\") " pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" Jan 27 12:23:46 crc kubenswrapper[4745]: I0127 12:23:46.620960 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" Jan 27 12:23:46 crc kubenswrapper[4745]: I0127 12:23:46.821216 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gqlqf"] Jan 27 12:23:47 crc kubenswrapper[4745]: I0127 12:23:47.603599 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" event={"ID":"02627500-8d53-4290-b4c6-06d641348bb5","Type":"ContainerStarted","Data":"80c16a32e4bfe04adcef93134999ea21bd3d17ef604342536b1b82da66c2f471"} Jan 27 12:23:47 crc kubenswrapper[4745]: I0127 12:23:47.603647 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" event={"ID":"02627500-8d53-4290-b4c6-06d641348bb5","Type":"ContainerStarted","Data":"df71de869de3896fbd5e36116d5af98a864556d3156e9de13c4c510df7154063"} Jan 27 12:23:47 crc kubenswrapper[4745]: I0127 12:23:47.603879 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" Jan 27 12:23:47 crc kubenswrapper[4745]: I0127 12:23:47.639282 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" podStartSLOduration=1.639257497 podStartE2EDuration="1.639257497s" podCreationTimestamp="2026-01-27 12:23:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:23:47.629005274 +0000 UTC m=+720.433916002" watchObservedRunningTime="2026-01-27 12:23:47.639257497 +0000 UTC m=+720.444168205" Jan 27 12:23:55 crc kubenswrapper[4745]: I0127 12:23:55.612526 4745 scope.go:117] "RemoveContainer" containerID="1c1ca6841297b0075b5ad02fc7f84c079ae0dcbc97fbc61a6c7507b74306916c" Jan 27 12:23:55 crc kubenswrapper[4745]: I0127 12:23:55.653479 4745 scope.go:117] "RemoveContainer" containerID="b0916971a50047bc3ecf82a5e73970103b735a529c0ef23324cbb90cbed42099" Jan 27 12:24:06 crc kubenswrapper[4745]: I0127 12:24:06.625159 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-gqlqf" Jan 27 12:24:06 crc kubenswrapper[4745]: I0127 12:24:06.673268 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hhfbt"] Jan 27 12:24:31 crc kubenswrapper[4745]: I0127 12:24:31.712127 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" podUID="67cab9e2-eb12-495b-a350-8fc0886c1a29" containerName="registry" 
containerID="cri-o://475cfc31343bd26b863b736c1138584df1a440e7a4225414e8b1f52a56c3d700" gracePeriod=30 Jan 27 12:24:31 crc kubenswrapper[4745]: I0127 12:24:31.886130 4745 generic.go:334] "Generic (PLEG): container finished" podID="67cab9e2-eb12-495b-a350-8fc0886c1a29" containerID="475cfc31343bd26b863b736c1138584df1a440e7a4225414e8b1f52a56c3d700" exitCode=0 Jan 27 12:24:31 crc kubenswrapper[4745]: I0127 12:24:31.886194 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" event={"ID":"67cab9e2-eb12-495b-a350-8fc0886c1a29","Type":"ContainerDied","Data":"475cfc31343bd26b863b736c1138584df1a440e7a4225414e8b1f52a56c3d700"} Jan 27 12:24:32 crc kubenswrapper[4745]: I0127 12:24:32.110387 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:24:32 crc kubenswrapper[4745]: I0127 12:24:32.189506 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/67cab9e2-eb12-495b-a350-8fc0886c1a29-ca-trust-extracted\") pod \"67cab9e2-eb12-495b-a350-8fc0886c1a29\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " Jan 27 12:24:32 crc kubenswrapper[4745]: I0127 12:24:32.189557 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67cab9e2-eb12-495b-a350-8fc0886c1a29-trusted-ca\") pod \"67cab9e2-eb12-495b-a350-8fc0886c1a29\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " Jan 27 12:24:32 crc kubenswrapper[4745]: I0127 12:24:32.189581 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/67cab9e2-eb12-495b-a350-8fc0886c1a29-registry-tls\") pod \"67cab9e2-eb12-495b-a350-8fc0886c1a29\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " Jan 27 12:24:32 crc kubenswrapper[4745]: I0127 12:24:32.189729 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/67cab9e2-eb12-495b-a350-8fc0886c1a29-registry-certificates\") pod \"67cab9e2-eb12-495b-a350-8fc0886c1a29\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " Jan 27 12:24:32 crc kubenswrapper[4745]: I0127 12:24:32.189756 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/67cab9e2-eb12-495b-a350-8fc0886c1a29-installation-pull-secrets\") pod \"67cab9e2-eb12-495b-a350-8fc0886c1a29\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " Jan 27 12:24:32 crc kubenswrapper[4745]: I0127 12:24:32.189940 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"67cab9e2-eb12-495b-a350-8fc0886c1a29\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " Jan 27 12:24:32 crc kubenswrapper[4745]: I0127 12:24:32.189972 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/67cab9e2-eb12-495b-a350-8fc0886c1a29-bound-sa-token\") pod \"67cab9e2-eb12-495b-a350-8fc0886c1a29\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " Jan 27 12:24:32 crc kubenswrapper[4745]: I0127 12:24:32.189998 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-4wk2d\" (UniqueName: \"kubernetes.io/projected/67cab9e2-eb12-495b-a350-8fc0886c1a29-kube-api-access-4wk2d\") pod \"67cab9e2-eb12-495b-a350-8fc0886c1a29\" (UID: \"67cab9e2-eb12-495b-a350-8fc0886c1a29\") " Jan 27 12:24:32 crc kubenswrapper[4745]: I0127 12:24:32.190754 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67cab9e2-eb12-495b-a350-8fc0886c1a29-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "67cab9e2-eb12-495b-a350-8fc0886c1a29" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:24:32 crc kubenswrapper[4745]: I0127 12:24:32.190853 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67cab9e2-eb12-495b-a350-8fc0886c1a29-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "67cab9e2-eb12-495b-a350-8fc0886c1a29" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:24:32 crc kubenswrapper[4745]: I0127 12:24:32.197784 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67cab9e2-eb12-495b-a350-8fc0886c1a29-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "67cab9e2-eb12-495b-a350-8fc0886c1a29" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:24:32 crc kubenswrapper[4745]: I0127 12:24:32.198217 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67cab9e2-eb12-495b-a350-8fc0886c1a29-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "67cab9e2-eb12-495b-a350-8fc0886c1a29" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:24:32 crc kubenswrapper[4745]: I0127 12:24:32.198265 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67cab9e2-eb12-495b-a350-8fc0886c1a29-kube-api-access-4wk2d" (OuterVolumeSpecName: "kube-api-access-4wk2d") pod "67cab9e2-eb12-495b-a350-8fc0886c1a29" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29"). InnerVolumeSpecName "kube-api-access-4wk2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:24:32 crc kubenswrapper[4745]: I0127 12:24:32.201456 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67cab9e2-eb12-495b-a350-8fc0886c1a29-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "67cab9e2-eb12-495b-a350-8fc0886c1a29" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:24:32 crc kubenswrapper[4745]: I0127 12:24:32.209772 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67cab9e2-eb12-495b-a350-8fc0886c1a29-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "67cab9e2-eb12-495b-a350-8fc0886c1a29" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:24:32 crc kubenswrapper[4745]: I0127 12:24:32.211292 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "67cab9e2-eb12-495b-a350-8fc0886c1a29" (UID: "67cab9e2-eb12-495b-a350-8fc0886c1a29"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 12:24:32 crc kubenswrapper[4745]: I0127 12:24:32.291023 4745 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/67cab9e2-eb12-495b-a350-8fc0886c1a29-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 12:24:32 crc kubenswrapper[4745]: I0127 12:24:32.291081 4745 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/67cab9e2-eb12-495b-a350-8fc0886c1a29-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 12:24:32 crc kubenswrapper[4745]: I0127 12:24:32.291094 4745 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/67cab9e2-eb12-495b-a350-8fc0886c1a29-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 12:24:32 crc kubenswrapper[4745]: I0127 12:24:32.291105 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wk2d\" (UniqueName: \"kubernetes.io/projected/67cab9e2-eb12-495b-a350-8fc0886c1a29-kube-api-access-4wk2d\") on node \"crc\" DevicePath \"\"" Jan 27 12:24:32 crc kubenswrapper[4745]: I0127 12:24:32.291119 4745 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/67cab9e2-eb12-495b-a350-8fc0886c1a29-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 12:24:32 crc kubenswrapper[4745]: I0127 12:24:32.291133 4745 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67cab9e2-eb12-495b-a350-8fc0886c1a29-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:24:32 crc kubenswrapper[4745]: I0127 12:24:32.291146 4745 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/67cab9e2-eb12-495b-a350-8fc0886c1a29-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 12:24:32 crc kubenswrapper[4745]: I0127 12:24:32.894991 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" event={"ID":"67cab9e2-eb12-495b-a350-8fc0886c1a29","Type":"ContainerDied","Data":"dcf811754774e4dbe299301694064282617947e44166cc956163eda53be1b36c"} Jan 27 12:24:32 crc kubenswrapper[4745]: I0127 12:24:32.895078 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hhfbt" Jan 27 12:24:32 crc kubenswrapper[4745]: I0127 12:24:32.895080 4745 scope.go:117] "RemoveContainer" containerID="475cfc31343bd26b863b736c1138584df1a440e7a4225414e8b1f52a56c3d700" Jan 27 12:24:32 crc kubenswrapper[4745]: I0127 12:24:32.942980 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hhfbt"] Jan 27 12:24:32 crc kubenswrapper[4745]: I0127 12:24:32.948012 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hhfbt"] Jan 27 12:24:34 crc kubenswrapper[4745]: I0127 12:24:34.085712 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67cab9e2-eb12-495b-a350-8fc0886c1a29" path="/var/lib/kubelet/pods/67cab9e2-eb12-495b-a350-8fc0886c1a29/volumes" Jan 27 12:24:41 crc kubenswrapper[4745]: I0127 12:24:41.141246 4745 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 12:24:49 crc kubenswrapper[4745]: I0127 12:24:49.532279 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-g7rqb"] Jan 27 12:24:49 crc kubenswrapper[4745]: E0127 12:24:49.532735 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67cab9e2-eb12-495b-a350-8fc0886c1a29" containerName="registry" Jan 27 12:24:49 crc kubenswrapper[4745]: I0127 12:24:49.532752 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="67cab9e2-eb12-495b-a350-8fc0886c1a29" containerName="registry" Jan 27 12:24:49 crc kubenswrapper[4745]: I0127 12:24:49.532882 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="67cab9e2-eb12-495b-a350-8fc0886c1a29" containerName="registry" Jan 27 12:24:49 crc kubenswrapper[4745]: I0127 12:24:49.533307 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-g7rqb" Jan 27 12:24:49 crc kubenswrapper[4745]: I0127 12:24:49.537219 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 27 12:24:49 crc kubenswrapper[4745]: I0127 12:24:49.537456 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 27 12:24:49 crc kubenswrapper[4745]: I0127 12:24:49.539852 4745 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-zcxns" Jan 27 12:24:49 crc kubenswrapper[4745]: I0127 12:24:49.548278 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-g7rqb"] Jan 27 12:24:49 crc kubenswrapper[4745]: I0127 12:24:49.554549 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-286zr"] Jan 27 12:24:49 crc kubenswrapper[4745]: I0127 12:24:49.555365 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-286zr" Jan 27 12:24:49 crc kubenswrapper[4745]: I0127 12:24:49.560177 4745 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-nxlrq" Jan 27 12:24:49 crc kubenswrapper[4745]: I0127 12:24:49.575591 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-jr2pj"] Jan 27 12:24:49 crc kubenswrapper[4745]: I0127 12:24:49.576458 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-jr2pj" Jan 27 12:24:49 crc kubenswrapper[4745]: I0127 12:24:49.580611 4745 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-6s25x" Jan 27 12:24:49 crc kubenswrapper[4745]: I0127 12:24:49.581787 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-286zr"] Jan 27 12:24:49 crc kubenswrapper[4745]: I0127 12:24:49.587835 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-jr2pj"] Jan 27 12:24:49 crc kubenswrapper[4745]: I0127 12:24:49.609374 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwxrx\" (UniqueName: \"kubernetes.io/projected/29719123-511c-4ab3-80e0-956a42bbce47-kube-api-access-jwxrx\") pod \"cert-manager-858654f9db-286zr\" (UID: \"29719123-511c-4ab3-80e0-956a42bbce47\") " pod="cert-manager/cert-manager-858654f9db-286zr" Jan 27 12:24:49 crc kubenswrapper[4745]: I0127 12:24:49.609482 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkpfv\" (UniqueName: \"kubernetes.io/projected/a95c82a2-1ac8-49c2-a42d-9c597f532783-kube-api-access-kkpfv\") pod \"cert-manager-webhook-687f57d79b-jr2pj\" (UID: \"a95c82a2-1ac8-49c2-a42d-9c597f532783\") " pod="cert-manager/cert-manager-webhook-687f57d79b-jr2pj" Jan 27 12:24:49 crc kubenswrapper[4745]: I0127 12:24:49.609517 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnsc7\" (UniqueName: \"kubernetes.io/projected/991f26a3-5089-44a9-99e5-b3690b308b23-kube-api-access-tnsc7\") pod \"cert-manager-cainjector-cf98fcc89-g7rqb\" (UID: \"991f26a3-5089-44a9-99e5-b3690b308b23\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-g7rqb" Jan 27 12:24:49 crc kubenswrapper[4745]: I0127 12:24:49.710695 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkpfv\" (UniqueName: \"kubernetes.io/projected/a95c82a2-1ac8-49c2-a42d-9c597f532783-kube-api-access-kkpfv\") pod \"cert-manager-webhook-687f57d79b-jr2pj\" (UID: \"a95c82a2-1ac8-49c2-a42d-9c597f532783\") " pod="cert-manager/cert-manager-webhook-687f57d79b-jr2pj" Jan 27 12:24:49 crc kubenswrapper[4745]: I0127 12:24:49.710751 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnsc7\" (UniqueName: \"kubernetes.io/projected/991f26a3-5089-44a9-99e5-b3690b308b23-kube-api-access-tnsc7\") pod \"cert-manager-cainjector-cf98fcc89-g7rqb\" (UID: \"991f26a3-5089-44a9-99e5-b3690b308b23\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-g7rqb" Jan 27 12:24:49 crc kubenswrapper[4745]: I0127 12:24:49.710784 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwxrx\" (UniqueName: \"kubernetes.io/projected/29719123-511c-4ab3-80e0-956a42bbce47-kube-api-access-jwxrx\") pod \"cert-manager-858654f9db-286zr\" (UID: \"29719123-511c-4ab3-80e0-956a42bbce47\") " pod="cert-manager/cert-manager-858654f9db-286zr" Jan 27 12:24:49 crc kubenswrapper[4745]: I0127 12:24:49.727931 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnsc7\" (UniqueName: \"kubernetes.io/projected/991f26a3-5089-44a9-99e5-b3690b308b23-kube-api-access-tnsc7\") pod \"cert-manager-cainjector-cf98fcc89-g7rqb\" (UID: \"991f26a3-5089-44a9-99e5-b3690b308b23\") " 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-g7rqb" Jan 27 12:24:49 crc kubenswrapper[4745]: I0127 12:24:49.729610 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwxrx\" (UniqueName: \"kubernetes.io/projected/29719123-511c-4ab3-80e0-956a42bbce47-kube-api-access-jwxrx\") pod \"cert-manager-858654f9db-286zr\" (UID: \"29719123-511c-4ab3-80e0-956a42bbce47\") " pod="cert-manager/cert-manager-858654f9db-286zr" Jan 27 12:24:49 crc kubenswrapper[4745]: I0127 12:24:49.730335 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkpfv\" (UniqueName: \"kubernetes.io/projected/a95c82a2-1ac8-49c2-a42d-9c597f532783-kube-api-access-kkpfv\") pod \"cert-manager-webhook-687f57d79b-jr2pj\" (UID: \"a95c82a2-1ac8-49c2-a42d-9c597f532783\") " pod="cert-manager/cert-manager-webhook-687f57d79b-jr2pj" Jan 27 12:24:49 crc kubenswrapper[4745]: I0127 12:24:49.847619 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-g7rqb" Jan 27 12:24:49 crc kubenswrapper[4745]: I0127 12:24:49.872605 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-286zr" Jan 27 12:24:49 crc kubenswrapper[4745]: I0127 12:24:49.891324 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-jr2pj" Jan 27 12:24:50 crc kubenswrapper[4745]: I0127 12:24:50.179897 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-jr2pj"] Jan 27 12:24:50 crc kubenswrapper[4745]: I0127 12:24:50.185064 4745 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 12:24:50 crc kubenswrapper[4745]: I0127 12:24:50.278040 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-g7rqb"] Jan 27 12:24:50 crc kubenswrapper[4745]: W0127 12:24:50.285459 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod991f26a3_5089_44a9_99e5_b3690b308b23.slice/crio-238110e882b51106d60a8be7bd1378e1f2a99d7e5ad7c234810363e3dae21bd5 WatchSource:0}: Error finding container 238110e882b51106d60a8be7bd1378e1f2a99d7e5ad7c234810363e3dae21bd5: Status 404 returned error can't find the container with id 238110e882b51106d60a8be7bd1378e1f2a99d7e5ad7c234810363e3dae21bd5 Jan 27 12:24:50 crc kubenswrapper[4745]: I0127 12:24:50.333586 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-286zr"] Jan 27 12:24:50 crc kubenswrapper[4745]: W0127 12:24:50.338450 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29719123_511c_4ab3_80e0_956a42bbce47.slice/crio-5136bc145632f8f389e9cecd8981f311e375291ec448589b3dde817ba1414174 WatchSource:0}: Error finding container 5136bc145632f8f389e9cecd8981f311e375291ec448589b3dde817ba1414174: Status 404 returned error can't find the container with id 5136bc145632f8f389e9cecd8981f311e375291ec448589b3dde817ba1414174 Jan 27 12:24:51 crc kubenswrapper[4745]: I0127 12:24:51.017865 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-jr2pj" event={"ID":"a95c82a2-1ac8-49c2-a42d-9c597f532783","Type":"ContainerStarted","Data":"28b0d1e13193210d261e624b4fdf85ce7ceb31b250c1b473a97802d5b39b0588"} 
Jan 27 12:24:51 crc kubenswrapper[4745]: I0127 12:24:51.021525 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-286zr" event={"ID":"29719123-511c-4ab3-80e0-956a42bbce47","Type":"ContainerStarted","Data":"5136bc145632f8f389e9cecd8981f311e375291ec448589b3dde817ba1414174"} Jan 27 12:24:51 crc kubenswrapper[4745]: I0127 12:24:51.023651 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-g7rqb" event={"ID":"991f26a3-5089-44a9-99e5-b3690b308b23","Type":"ContainerStarted","Data":"238110e882b51106d60a8be7bd1378e1f2a99d7e5ad7c234810363e3dae21bd5"} Jan 27 12:24:56 crc kubenswrapper[4745]: I0127 12:24:56.067355 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-286zr" event={"ID":"29719123-511c-4ab3-80e0-956a42bbce47","Type":"ContainerStarted","Data":"921de5ae8ad071439d84a347a60a237b527d8e377f529ae63985dfae9a89ba02"} Jan 27 12:24:56 crc kubenswrapper[4745]: I0127 12:24:56.070160 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-jr2pj" event={"ID":"a95c82a2-1ac8-49c2-a42d-9c597f532783","Type":"ContainerStarted","Data":"005223f9dd32c4efbf54cfa4979ff97f34b58630bfa07cf178aaa74038b1c1aa"} Jan 27 12:24:56 crc kubenswrapper[4745]: I0127 12:24:56.070304 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-jr2pj" Jan 27 12:24:56 crc kubenswrapper[4745]: I0127 12:24:56.086717 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-286zr" podStartSLOduration=1.681746268 podStartE2EDuration="7.086704479s" podCreationTimestamp="2026-01-27 12:24:49 +0000 UTC" firstStartedPulling="2026-01-27 12:24:50.341097426 +0000 UTC m=+783.146008114" lastFinishedPulling="2026-01-27 12:24:55.746055627 +0000 UTC m=+788.550966325" observedRunningTime="2026-01-27 12:24:56.084771582 +0000 UTC m=+788.889682280" watchObservedRunningTime="2026-01-27 12:24:56.086704479 +0000 UTC m=+788.891615167" Jan 27 12:24:56 crc kubenswrapper[4745]: I0127 12:24:56.104990 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-jr2pj" podStartSLOduration=1.542213692 podStartE2EDuration="7.10497474s" podCreationTimestamp="2026-01-27 12:24:49 +0000 UTC" firstStartedPulling="2026-01-27 12:24:50.184885016 +0000 UTC m=+782.989795704" lastFinishedPulling="2026-01-27 12:24:55.747646064 +0000 UTC m=+788.552556752" observedRunningTime="2026-01-27 12:24:56.100271351 +0000 UTC m=+788.905182039" watchObservedRunningTime="2026-01-27 12:24:56.10497474 +0000 UTC m=+788.909885428" Jan 27 12:24:57 crc kubenswrapper[4745]: I0127 12:24:57.078478 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-g7rqb" event={"ID":"991f26a3-5089-44a9-99e5-b3690b308b23","Type":"ContainerStarted","Data":"5ba8299a146870335eee22cf1b327c25992ce1fa763c91b33d03a1468d91e476"} Jan 27 12:24:57 crc kubenswrapper[4745]: I0127 12:24:57.103933 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-g7rqb" podStartSLOduration=2.464531585 podStartE2EDuration="8.103902658s" podCreationTimestamp="2026-01-27 12:24:49 +0000 UTC" firstStartedPulling="2026-01-27 12:24:50.287510631 +0000 UTC m=+783.092421309" lastFinishedPulling="2026-01-27 12:24:55.926881694 +0000 UTC m=+788.731792382" 
observedRunningTime="2026-01-27 12:24:57.092910513 +0000 UTC m=+789.897821241" watchObservedRunningTime="2026-01-27 12:24:57.103902658 +0000 UTC m=+789.908813376" Jan 27 12:24:59 crc kubenswrapper[4745]: I0127 12:24:59.295795 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bnfh4"] Jan 27 12:24:59 crc kubenswrapper[4745]: I0127 12:24:59.298847 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="ovn-controller" containerID="cri-o://2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544" gracePeriod=30 Jan 27 12:24:59 crc kubenswrapper[4745]: I0127 12:24:59.299047 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="sbdb" containerID="cri-o://93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01" gracePeriod=30 Jan 27 12:24:59 crc kubenswrapper[4745]: I0127 12:24:59.299155 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="nbdb" containerID="cri-o://d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1" gracePeriod=30 Jan 27 12:24:59 crc kubenswrapper[4745]: I0127 12:24:59.299254 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="northd" containerID="cri-o://3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf" gracePeriod=30 Jan 27 12:24:59 crc kubenswrapper[4745]: I0127 12:24:59.299371 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51" gracePeriod=30 Jan 27 12:24:59 crc kubenswrapper[4745]: I0127 12:24:59.299489 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="kube-rbac-proxy-node" containerID="cri-o://d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce" gracePeriod=30 Jan 27 12:24:59 crc kubenswrapper[4745]: I0127 12:24:59.299588 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="ovn-acl-logging" containerID="cri-o://37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56" gracePeriod=30 Jan 27 12:24:59 crc kubenswrapper[4745]: I0127 12:24:59.346676 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="ovnkube-controller" containerID="cri-o://60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8" gracePeriod=30 Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.043051 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bnfh4_26b1987b-69bb-4768-a874-5a97b3327469/ovnkube-controller/3.log" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 
12:25:00.045646 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bnfh4_26b1987b-69bb-4768-a874-5a97b3327469/ovn-acl-logging/0.log" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.046308 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bnfh4_26b1987b-69bb-4768-a874-5a97b3327469/ovn-controller/0.log" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.046793 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.067920 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-run-ovn-kubernetes\") pod \"26b1987b-69bb-4768-a874-5a97b3327469\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.067962 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-var-lib-openvswitch\") pod \"26b1987b-69bb-4768-a874-5a97b3327469\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.067991 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/26b1987b-69bb-4768-a874-5a97b3327469-env-overrides\") pod \"26b1987b-69bb-4768-a874-5a97b3327469\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068016 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-slash\") pod \"26b1987b-69bb-4768-a874-5a97b3327469\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068037 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-kubelet\") pod \"26b1987b-69bb-4768-a874-5a97b3327469\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068065 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8whl8\" (UniqueName: \"kubernetes.io/projected/26b1987b-69bb-4768-a874-5a97b3327469-kube-api-access-8whl8\") pod \"26b1987b-69bb-4768-a874-5a97b3327469\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068117 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/26b1987b-69bb-4768-a874-5a97b3327469-ovn-node-metrics-cert\") pod \"26b1987b-69bb-4768-a874-5a97b3327469\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068156 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-systemd-units\") pod \"26b1987b-69bb-4768-a874-5a97b3327469\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068206 4745 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/26b1987b-69bb-4768-a874-5a97b3327469-ovnkube-config\") pod \"26b1987b-69bb-4768-a874-5a97b3327469\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068191 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "26b1987b-69bb-4768-a874-5a97b3327469" (UID: "26b1987b-69bb-4768-a874-5a97b3327469"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068216 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-slash" (OuterVolumeSpecName: "host-slash") pod "26b1987b-69bb-4768-a874-5a97b3327469" (UID: "26b1987b-69bb-4768-a874-5a97b3327469"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068234 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-run-netns\") pod \"26b1987b-69bb-4768-a874-5a97b3327469\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068268 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "26b1987b-69bb-4768-a874-5a97b3327469" (UID: "26b1987b-69bb-4768-a874-5a97b3327469"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068312 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "26b1987b-69bb-4768-a874-5a97b3327469" (UID: "26b1987b-69bb-4768-a874-5a97b3327469"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068293 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-etc-openvswitch\") pod \"26b1987b-69bb-4768-a874-5a97b3327469\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068347 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "26b1987b-69bb-4768-a874-5a97b3327469" (UID: "26b1987b-69bb-4768-a874-5a97b3327469"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068364 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-run-ovn\") pod \"26b1987b-69bb-4768-a874-5a97b3327469\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068394 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-log-socket\") pod \"26b1987b-69bb-4768-a874-5a97b3327469\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068432 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-cni-bin\") pod \"26b1987b-69bb-4768-a874-5a97b3327469\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068465 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-run-systemd\") pod \"26b1987b-69bb-4768-a874-5a97b3327469\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068497 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-node-log\") pod \"26b1987b-69bb-4768-a874-5a97b3327469\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068525 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-run-openvswitch\") pod \"26b1987b-69bb-4768-a874-5a97b3327469\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068574 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/26b1987b-69bb-4768-a874-5a97b3327469-ovnkube-script-lib\") pod \"26b1987b-69bb-4768-a874-5a97b3327469\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068604 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-cni-netd\") pod \"26b1987b-69bb-4768-a874-5a97b3327469\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068633 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-var-lib-cni-networks-ovn-kubernetes\") pod \"26b1987b-69bb-4768-a874-5a97b3327469\" (UID: \"26b1987b-69bb-4768-a874-5a97b3327469\") " Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068765 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26b1987b-69bb-4768-a874-5a97b3327469-env-overrides" (OuterVolumeSpecName: "env-overrides") pod 
"26b1987b-69bb-4768-a874-5a97b3327469" (UID: "26b1987b-69bb-4768-a874-5a97b3327469"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068892 4745 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/26b1987b-69bb-4768-a874-5a97b3327469-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068914 4745 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-slash\") on node \"crc\" DevicePath \"\"" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068930 4745 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068944 4745 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068959 4745 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068973 4745 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.069013 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-node-log" (OuterVolumeSpecName: "node-log") pod "26b1987b-69bb-4768-a874-5a97b3327469" (UID: "26b1987b-69bb-4768-a874-5a97b3327469"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.069081 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "26b1987b-69bb-4768-a874-5a97b3327469" (UID: "26b1987b-69bb-4768-a874-5a97b3327469"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.068109 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "26b1987b-69bb-4768-a874-5a97b3327469" (UID: "26b1987b-69bb-4768-a874-5a97b3327469"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.069232 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "26b1987b-69bb-4768-a874-5a97b3327469" (UID: "26b1987b-69bb-4768-a874-5a97b3327469"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.069493 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "26b1987b-69bb-4768-a874-5a97b3327469" (UID: "26b1987b-69bb-4768-a874-5a97b3327469"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.069531 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "26b1987b-69bb-4768-a874-5a97b3327469" (UID: "26b1987b-69bb-4768-a874-5a97b3327469"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.069530 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-log-socket" (OuterVolumeSpecName: "log-socket") pod "26b1987b-69bb-4768-a874-5a97b3327469" (UID: "26b1987b-69bb-4768-a874-5a97b3327469"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.069562 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "26b1987b-69bb-4768-a874-5a97b3327469" (UID: "26b1987b-69bb-4768-a874-5a97b3327469"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.069675 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "26b1987b-69bb-4768-a874-5a97b3327469" (UID: "26b1987b-69bb-4768-a874-5a97b3327469"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.069617 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26b1987b-69bb-4768-a874-5a97b3327469-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "26b1987b-69bb-4768-a874-5a97b3327469" (UID: "26b1987b-69bb-4768-a874-5a97b3327469"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.069873 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26b1987b-69bb-4768-a874-5a97b3327469-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "26b1987b-69bb-4768-a874-5a97b3327469" (UID: "26b1987b-69bb-4768-a874-5a97b3327469"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.075708 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26b1987b-69bb-4768-a874-5a97b3327469-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "26b1987b-69bb-4768-a874-5a97b3327469" (UID: "26b1987b-69bb-4768-a874-5a97b3327469"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.076741 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26b1987b-69bb-4768-a874-5a97b3327469-kube-api-access-8whl8" (OuterVolumeSpecName: "kube-api-access-8whl8") pod "26b1987b-69bb-4768-a874-5a97b3327469" (UID: "26b1987b-69bb-4768-a874-5a97b3327469"). InnerVolumeSpecName "kube-api-access-8whl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.110598 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bnfh4_26b1987b-69bb-4768-a874-5a97b3327469/ovnkube-controller/3.log" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.113651 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "26b1987b-69bb-4768-a874-5a97b3327469" (UID: "26b1987b-69bb-4768-a874-5a97b3327469"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.121280 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bnfh4_26b1987b-69bb-4768-a874-5a97b3327469/ovn-acl-logging/0.log" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.122645 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bnfh4_26b1987b-69bb-4768-a874-5a97b3327469/ovn-controller/0.log" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123225 4745 generic.go:334] "Generic (PLEG): container finished" podID="26b1987b-69bb-4768-a874-5a97b3327469" containerID="60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8" exitCode=0 Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123262 4745 generic.go:334] "Generic (PLEG): container finished" podID="26b1987b-69bb-4768-a874-5a97b3327469" containerID="93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01" exitCode=0 Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123271 4745 generic.go:334] "Generic (PLEG): container finished" podID="26b1987b-69bb-4768-a874-5a97b3327469" containerID="d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1" exitCode=0 Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123280 4745 generic.go:334] "Generic (PLEG): container finished" podID="26b1987b-69bb-4768-a874-5a97b3327469" containerID="3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf" exitCode=0 Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123288 4745 generic.go:334] "Generic (PLEG): container finished" podID="26b1987b-69bb-4768-a874-5a97b3327469" containerID="52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51" exitCode=0 Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123296 4745 generic.go:334] "Generic (PLEG): container finished" podID="26b1987b-69bb-4768-a874-5a97b3327469" 
containerID="d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce" exitCode=0 Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123304 4745 generic.go:334] "Generic (PLEG): container finished" podID="26b1987b-69bb-4768-a874-5a97b3327469" containerID="37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56" exitCode=143 Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123316 4745 generic.go:334] "Generic (PLEG): container finished" podID="26b1987b-69bb-4768-a874-5a97b3327469" containerID="2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544" exitCode=143 Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123350 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123409 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" event={"ID":"26b1987b-69bb-4768-a874-5a97b3327469","Type":"ContainerDied","Data":"60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123452 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" event={"ID":"26b1987b-69bb-4768-a874-5a97b3327469","Type":"ContainerDied","Data":"93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123470 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" event={"ID":"26b1987b-69bb-4768-a874-5a97b3327469","Type":"ContainerDied","Data":"d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123485 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" event={"ID":"26b1987b-69bb-4768-a874-5a97b3327469","Type":"ContainerDied","Data":"3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123503 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" event={"ID":"26b1987b-69bb-4768-a874-5a97b3327469","Type":"ContainerDied","Data":"52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123525 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" event={"ID":"26b1987b-69bb-4768-a874-5a97b3327469","Type":"ContainerDied","Data":"d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123542 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123558 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123568 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123577 4745 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123587 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123596 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123605 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123614 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123623 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123635 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" event={"ID":"26b1987b-69bb-4768-a874-5a97b3327469","Type":"ContainerDied","Data":"37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123648 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123660 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123671 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123681 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123690 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123698 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123707 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123748 4745 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123761 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123771 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123786 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" event={"ID":"26b1987b-69bb-4768-a874-5a97b3327469","Type":"ContainerDied","Data":"2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123800 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123840 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123850 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123857 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123865 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123872 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123880 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123886 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123893 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123901 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123912 4745 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnfh4" event={"ID":"26b1987b-69bb-4768-a874-5a97b3327469","Type":"ContainerDied","Data":"497b03f46b89194ada5f7b6e50d63a3832cf7e8a6018995b5f5d73648f2dc301"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123923 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123931 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123938 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123945 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123952 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123959 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123967 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123974 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123980 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.123987 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.124019 4745 scope.go:117] "RemoveContainer" containerID="60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.129715 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-97hlh_c438e876-f4c1-42ca-b935-b5e58be9cfb2/kube-multus/2.log" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.130944 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-77dns"] Jan 27 12:25:00 crc kubenswrapper[4745]: E0127 12:25:00.131173 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="ovnkube-controller" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.131188 4745 
state_mem.go:107] "Deleted CPUSet assignment" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="ovnkube-controller" Jan 27 12:25:00 crc kubenswrapper[4745]: E0127 12:25:00.131201 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="ovn-acl-logging" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.131209 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="ovn-acl-logging" Jan 27 12:25:00 crc kubenswrapper[4745]: E0127 12:25:00.131226 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="ovn-controller" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.131234 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="ovn-controller" Jan 27 12:25:00 crc kubenswrapper[4745]: E0127 12:25:00.131245 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="northd" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.131255 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="northd" Jan 27 12:25:00 crc kubenswrapper[4745]: E0127 12:25:00.131264 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="nbdb" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.131273 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="nbdb" Jan 27 12:25:00 crc kubenswrapper[4745]: E0127 12:25:00.131287 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="kubecfg-setup" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.131295 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="kubecfg-setup" Jan 27 12:25:00 crc kubenswrapper[4745]: E0127 12:25:00.131304 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="ovnkube-controller" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.131312 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="ovnkube-controller" Jan 27 12:25:00 crc kubenswrapper[4745]: E0127 12:25:00.131321 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="kube-rbac-proxy-node" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.131329 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="kube-rbac-proxy-node" Jan 27 12:25:00 crc kubenswrapper[4745]: E0127 12:25:00.131340 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="ovnkube-controller" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.131348 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="ovnkube-controller" Jan 27 12:25:00 crc kubenswrapper[4745]: E0127 12:25:00.131359 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 
12:25:00.131367 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 12:25:00 crc kubenswrapper[4745]: E0127 12:25:00.131379 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="sbdb" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.131386 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="sbdb" Jan 27 12:25:00 crc kubenswrapper[4745]: E0127 12:25:00.131394 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="ovnkube-controller" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.131401 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="ovnkube-controller" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.131517 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="nbdb" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.131530 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="sbdb" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.131540 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="ovnkube-controller" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.131548 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="ovnkube-controller" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.131556 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="ovn-controller" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.131566 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="ovnkube-controller" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.131576 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="northd" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.131585 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.131596 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="kube-rbac-proxy-node" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.131606 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="ovn-acl-logging" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.131614 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="ovnkube-controller" Jan 27 12:25:00 crc kubenswrapper[4745]: E0127 12:25:00.131729 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="ovnkube-controller" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.131740 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b1987b-69bb-4768-a874-5a97b3327469" 
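
[Annotation] The alternating cpu_manager "RemoveStaleState: removing container" and state_mem "Deleted CPUSet assignment" records above are the kubelet's resource managers dropping checkpointed state keyed by (podUID, containerName) for the deleted pod; ovnkube-controller appears several times, presumably once per remembered instance of that repeatedly restarted container. A minimal Go sketch of keyed cleanup in that style (annotation only, not kubelet source; the key type and map are my simplification):

    package main

    import "fmt"

    type key struct{ podUID, container string }

    func main() {
        gone := "26b1987b-69bb-4768-a874-5a97b3327469"
        state := map[key]string{
            {gone, "ovnkube-controller"}: "cpuset-a",
            {gone, "northd"}:             "cpuset-b",
            {gone, "nbdb"}:               "cpuset-c",
        }
        for k := range state { // deleting during range is safe in Go
            if k.podUID == gone {
                fmt.Printf("Deleted CPUSet assignment podUID=%q containerName=%q\n", k.podUID, k.container)
                delete(state, k)
            }
        }
    }
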
containerName="ovnkube-controller" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.131879 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="26b1987b-69bb-4768-a874-5a97b3327469" containerName="ovnkube-controller" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.132444 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-97hlh_c438e876-f4c1-42ca-b935-b5e58be9cfb2/kube-multus/1.log" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.132548 4745 generic.go:334] "Generic (PLEG): container finished" podID="c438e876-f4c1-42ca-b935-b5e58be9cfb2" containerID="9e919995b5f66ba68c68e45fee6b3943248ac2b60f27245ab0acf28144661b43" exitCode=2 Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.133790 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-97hlh" event={"ID":"c438e876-f4c1-42ca-b935-b5e58be9cfb2","Type":"ContainerDied","Data":"9e919995b5f66ba68c68e45fee6b3943248ac2b60f27245ab0acf28144661b43"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.133844 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"baa0c592826e1a106ac51a6142fa5d252f837bafe03f4cdc94832dbb4704e9f3"} Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.133999 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.134295 4745 scope.go:117] "RemoveContainer" containerID="9e919995b5f66ba68c68e45fee6b3943248ac2b60f27245ab0acf28144661b43" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.136532 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.136589 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.136749 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.165355 4745 scope.go:117] "RemoveContainer" containerID="17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.170777 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-host-slash\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.170854 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0124f904-af6e-4db5-bd26-4a30bd6d5db7-ovnkube-script-lib\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.170914 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-etc-openvswitch\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.170976 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-node-log\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.171016 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-run-ovn\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.171058 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-host-run-netns\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.171091 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-host-cni-netd\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.171123 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-log-socket\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.171161 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0124f904-af6e-4db5-bd26-4a30bd6d5db7-ovn-node-metrics-cert\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.171201 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-run-openvswitch\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.171358 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-host-cni-bin\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.171390 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzvxt\" (UniqueName: \"kubernetes.io/projected/0124f904-af6e-4db5-bd26-4a30bd6d5db7-kube-api-access-kzvxt\") pod 
\"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.171419 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0124f904-af6e-4db5-bd26-4a30bd6d5db7-ovnkube-config\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.171456 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-systemd-units\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.171489 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-var-lib-openvswitch\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.171524 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.171583 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-host-run-ovn-kubernetes\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.171622 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-run-systemd\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.171661 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0124f904-af6e-4db5-bd26-4a30bd6d5db7-env-overrides\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.171692 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-host-kubelet\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.171757 4745 reconciler_common.go:293] "Volume detached for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.171778 4745 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.171891 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8whl8\" (UniqueName: \"kubernetes.io/projected/26b1987b-69bb-4768-a874-5a97b3327469-kube-api-access-8whl8\") on node \"crc\" DevicePath \"\"" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.171928 4745 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/26b1987b-69bb-4768-a874-5a97b3327469-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.171947 4745 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/26b1987b-69bb-4768-a874-5a97b3327469-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.171961 4745 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.171973 4745 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-log-socket\") on node \"crc\" DevicePath \"\"" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.171986 4745 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.172003 4745 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.172019 4745 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-node-log\") on node \"crc\" DevicePath \"\"" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.172037 4745 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.172055 4745 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/26b1987b-69bb-4768-a874-5a97b3327469-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.172071 4745 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.172087 4745 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/26b1987b-69bb-4768-a874-5a97b3327469-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.201604 4745 scope.go:117] "RemoveContainer" containerID="93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.207528 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bnfh4"] Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.213539 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bnfh4"] Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.233711 4745 scope.go:117] "RemoveContainer" containerID="d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.245138 4745 scope.go:117] "RemoveContainer" containerID="3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.263297 4745 scope.go:117] "RemoveContainer" containerID="52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.280729 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-host-run-ovn-kubernetes\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.280787 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-run-systemd\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.280832 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0124f904-af6e-4db5-bd26-4a30bd6d5db7-env-overrides\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.280854 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-host-kubelet\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.280889 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-host-slash\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.280908 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0124f904-af6e-4db5-bd26-4a30bd6d5db7-ovnkube-script-lib\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc 
kubenswrapper[4745]: I0127 12:25:00.280930 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-etc-openvswitch\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.280954 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-node-log\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.280975 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-run-ovn\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.281004 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-host-run-netns\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.281025 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-host-cni-netd\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.281047 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-log-socket\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.281070 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0124f904-af6e-4db5-bd26-4a30bd6d5db7-ovn-node-metrics-cert\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.281095 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-run-openvswitch\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.281127 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-host-cni-bin\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.281153 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-kzvxt\" (UniqueName: \"kubernetes.io/projected/0124f904-af6e-4db5-bd26-4a30bd6d5db7-kube-api-access-kzvxt\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.281174 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0124f904-af6e-4db5-bd26-4a30bd6d5db7-ovnkube-config\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.281196 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-systemd-units\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.281219 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-var-lib-openvswitch\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.281244 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.281323 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.281372 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-etc-openvswitch\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.281401 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-node-log\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.281428 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-run-ovn\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.281454 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-host-run-netns\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.281481 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-host-cni-netd\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.281509 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-log-socket\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.281962 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0124f904-af6e-4db5-bd26-4a30bd6d5db7-ovnkube-script-lib\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.282038 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-host-run-ovn-kubernetes\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.282084 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-run-systemd\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.282493 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0124f904-af6e-4db5-bd26-4a30bd6d5db7-env-overrides\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.282543 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-host-kubelet\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.282573 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-host-slash\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.282888 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-run-openvswitch\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.282932 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-host-cni-bin\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.282965 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-systemd-units\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.283924 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0124f904-af6e-4db5-bd26-4a30bd6d5db7-var-lib-openvswitch\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.283967 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0124f904-af6e-4db5-bd26-4a30bd6d5db7-ovnkube-config\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.285604 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0124f904-af6e-4db5-bd26-4a30bd6d5db7-ovn-node-metrics-cert\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.286053 4745 scope.go:117] "RemoveContainer" containerID="d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.297743 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzvxt\" (UniqueName: \"kubernetes.io/projected/0124f904-af6e-4db5-bd26-4a30bd6d5db7-kube-api-access-kzvxt\") pod \"ovnkube-node-77dns\" (UID: \"0124f904-af6e-4db5-bd26-4a30bd6d5db7\") " pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.300944 4745 scope.go:117] "RemoveContainer" containerID="37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.312499 4745 scope.go:117] "RemoveContainer" containerID="2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.323089 4745 scope.go:117] "RemoveContainer" containerID="a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.333861 4745 scope.go:117] "RemoveContainer" containerID="60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8" Jan 27 12:25:00 crc kubenswrapper[4745]: E0127 12:25:00.334264 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8\": container with ID starting with 
60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8 not found: ID does not exist" containerID="60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.334299 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8"} err="failed to get container status \"60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8\": rpc error: code = NotFound desc = could not find container \"60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8\": container with ID starting with 60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8 not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.334320 4745 scope.go:117] "RemoveContainer" containerID="17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627" Jan 27 12:25:00 crc kubenswrapper[4745]: E0127 12:25:00.334579 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627\": container with ID starting with 17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627 not found: ID does not exist" containerID="17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.334633 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627"} err="failed to get container status \"17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627\": rpc error: code = NotFound desc = could not find container \"17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627\": container with ID starting with 17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627 not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.334667 4745 scope.go:117] "RemoveContainer" containerID="93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01" Jan 27 12:25:00 crc kubenswrapper[4745]: E0127 12:25:00.334976 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\": container with ID starting with 93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01 not found: ID does not exist" containerID="93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.335009 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01"} err="failed to get container status \"93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\": rpc error: code = NotFound desc = could not find container \"93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\": container with ID starting with 93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01 not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.335030 4745 scope.go:117] "RemoveContainer" containerID="d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1" Jan 27 12:25:00 crc kubenswrapper[4745]: E0127 12:25:00.335334 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\": container with ID starting with d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1 not found: ID does not exist" containerID="d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.335359 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1"} err="failed to get container status \"d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\": rpc error: code = NotFound desc = could not find container \"d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\": container with ID starting with d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1 not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.335376 4745 scope.go:117] "RemoveContainer" containerID="3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf" Jan 27 12:25:00 crc kubenswrapper[4745]: E0127 12:25:00.335624 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\": container with ID starting with 3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf not found: ID does not exist" containerID="3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.335653 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf"} err="failed to get container status \"3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\": rpc error: code = NotFound desc = could not find container \"3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\": container with ID starting with 3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.335670 4745 scope.go:117] "RemoveContainer" containerID="52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51" Jan 27 12:25:00 crc kubenswrapper[4745]: E0127 12:25:00.335967 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\": container with ID starting with 52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51 not found: ID does not exist" containerID="52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.335998 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51"} err="failed to get container status \"52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\": rpc error: code = NotFound desc = could not find container \"52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\": container with ID starting with 52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51 not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.336018 4745 scope.go:117] "RemoveContainer" 
containerID="d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce" Jan 27 12:25:00 crc kubenswrapper[4745]: E0127 12:25:00.336268 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\": container with ID starting with d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce not found: ID does not exist" containerID="d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.336299 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce"} err="failed to get container status \"d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\": rpc error: code = NotFound desc = could not find container \"d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\": container with ID starting with d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.336318 4745 scope.go:117] "RemoveContainer" containerID="37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56" Jan 27 12:25:00 crc kubenswrapper[4745]: E0127 12:25:00.336566 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\": container with ID starting with 37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56 not found: ID does not exist" containerID="37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.336600 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56"} err="failed to get container status \"37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\": rpc error: code = NotFound desc = could not find container \"37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\": container with ID starting with 37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56 not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.336620 4745 scope.go:117] "RemoveContainer" containerID="2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544" Jan 27 12:25:00 crc kubenswrapper[4745]: E0127 12:25:00.336864 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\": container with ID starting with 2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544 not found: ID does not exist" containerID="2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.336891 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544"} err="failed to get container status \"2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\": rpc error: code = NotFound desc = could not find container \"2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\": container with ID starting with 
2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544 not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.336912 4745 scope.go:117] "RemoveContainer" containerID="a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb" Jan 27 12:25:00 crc kubenswrapper[4745]: E0127 12:25:00.337123 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\": container with ID starting with a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb not found: ID does not exist" containerID="a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.337149 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb"} err="failed to get container status \"a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\": rpc error: code = NotFound desc = could not find container \"a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\": container with ID starting with a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.337163 4745 scope.go:117] "RemoveContainer" containerID="60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.337341 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8"} err="failed to get container status \"60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8\": rpc error: code = NotFound desc = could not find container \"60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8\": container with ID starting with 60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8 not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.337368 4745 scope.go:117] "RemoveContainer" containerID="17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.337574 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627"} err="failed to get container status \"17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627\": rpc error: code = NotFound desc = could not find container \"17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627\": container with ID starting with 17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627 not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.337595 4745 scope.go:117] "RemoveContainer" containerID="93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.337794 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01"} err="failed to get container status \"93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\": rpc error: code = NotFound desc = could not find container \"93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\": container with ID starting with 
93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01 not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.337832 4745 scope.go:117] "RemoveContainer" containerID="d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.338136 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1"} err="failed to get container status \"d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\": rpc error: code = NotFound desc = could not find container \"d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\": container with ID starting with d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1 not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.338164 4745 scope.go:117] "RemoveContainer" containerID="3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.338369 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf"} err="failed to get container status \"3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\": rpc error: code = NotFound desc = could not find container \"3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\": container with ID starting with 3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.338391 4745 scope.go:117] "RemoveContainer" containerID="52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.338591 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51"} err="failed to get container status \"52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\": rpc error: code = NotFound desc = could not find container \"52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\": container with ID starting with 52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51 not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.338614 4745 scope.go:117] "RemoveContainer" containerID="d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.338870 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce"} err="failed to get container status \"d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\": rpc error: code = NotFound desc = could not find container \"d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\": container with ID starting with d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.338892 4745 scope.go:117] "RemoveContainer" containerID="37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.339135 4745 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56"} err="failed to get container status \"37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\": rpc error: code = NotFound desc = could not find container \"37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\": container with ID starting with 37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56 not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.339165 4745 scope.go:117] "RemoveContainer" containerID="2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.339407 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544"} err="failed to get container status \"2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\": rpc error: code = NotFound desc = could not find container \"2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\": container with ID starting with 2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544 not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.339431 4745 scope.go:117] "RemoveContainer" containerID="a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.339666 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb"} err="failed to get container status \"a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\": rpc error: code = NotFound desc = could not find container \"a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\": container with ID starting with a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.339691 4745 scope.go:117] "RemoveContainer" containerID="60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.339945 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8"} err="failed to get container status \"60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8\": rpc error: code = NotFound desc = could not find container \"60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8\": container with ID starting with 60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8 not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.339976 4745 scope.go:117] "RemoveContainer" containerID="17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.340229 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627"} err="failed to get container status \"17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627\": rpc error: code = NotFound desc = could not find container \"17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627\": container with ID starting with 17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627 not found: ID does not exist" Jan 
27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.340257 4745 scope.go:117] "RemoveContainer" containerID="93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.340485 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01"} err="failed to get container status \"93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\": rpc error: code = NotFound desc = could not find container \"93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\": container with ID starting with 93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01 not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.340518 4745 scope.go:117] "RemoveContainer" containerID="d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.340726 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1"} err="failed to get container status \"d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\": rpc error: code = NotFound desc = could not find container \"d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\": container with ID starting with d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1 not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.340746 4745 scope.go:117] "RemoveContainer" containerID="3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.340977 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf"} err="failed to get container status \"3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\": rpc error: code = NotFound desc = could not find container \"3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\": container with ID starting with 3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.341047 4745 scope.go:117] "RemoveContainer" containerID="52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.341261 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51"} err="failed to get container status \"52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\": rpc error: code = NotFound desc = could not find container \"52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\": container with ID starting with 52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51 not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.341279 4745 scope.go:117] "RemoveContainer" containerID="d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.341447 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce"} err="failed to get container status 
\"d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\": rpc error: code = NotFound desc = could not find container \"d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\": container with ID starting with d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.341463 4745 scope.go:117] "RemoveContainer" containerID="37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.341658 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56"} err="failed to get container status \"37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\": rpc error: code = NotFound desc = could not find container \"37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\": container with ID starting with 37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56 not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.341684 4745 scope.go:117] "RemoveContainer" containerID="2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.341908 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544"} err="failed to get container status \"2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\": rpc error: code = NotFound desc = could not find container \"2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\": container with ID starting with 2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544 not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.341926 4745 scope.go:117] "RemoveContainer" containerID="a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.342115 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb"} err="failed to get container status \"a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\": rpc error: code = NotFound desc = could not find container \"a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\": container with ID starting with a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.342138 4745 scope.go:117] "RemoveContainer" containerID="60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.342324 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8"} err="failed to get container status \"60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8\": rpc error: code = NotFound desc = could not find container \"60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8\": container with ID starting with 60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8 not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.342338 4745 scope.go:117] "RemoveContainer" 
containerID="17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.342520 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627"} err="failed to get container status \"17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627\": rpc error: code = NotFound desc = could not find container \"17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627\": container with ID starting with 17b32feea8002b6b6f4dd3afcac9767d85acf62d70e4eddf196aa51b19328627 not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.342541 4745 scope.go:117] "RemoveContainer" containerID="93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.342752 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01"} err="failed to get container status \"93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\": rpc error: code = NotFound desc = could not find container \"93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01\": container with ID starting with 93ffccbe8ab0bd9eff3bf7957b3014d96bfec33c31f3fa1ed16ea5ceb8c88e01 not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.342766 4745 scope.go:117] "RemoveContainer" containerID="d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.342990 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1"} err="failed to get container status \"d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\": rpc error: code = NotFound desc = could not find container \"d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1\": container with ID starting with d5293416e993b30c969ccb9f5ffd30120dbf6f85dfabf1c2d438ed24b1827ae1 not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.343016 4745 scope.go:117] "RemoveContainer" containerID="3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.343217 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf"} err="failed to get container status \"3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\": rpc error: code = NotFound desc = could not find container \"3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf\": container with ID starting with 3f34f382f5ba1017d9ff12729a26ddc1df25be48008682b8a01477f861f0bcdf not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.343233 4745 scope.go:117] "RemoveContainer" containerID="52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.343436 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51"} err="failed to get container status \"52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\": rpc error: code = NotFound desc = could not find 
container \"52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51\": container with ID starting with 52775d06a6fc5a8643ed593e24fd4bb26633aa2f49bf057087279f6168522b51 not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.343459 4745 scope.go:117] "RemoveContainer" containerID="d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.343649 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce"} err="failed to get container status \"d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\": rpc error: code = NotFound desc = could not find container \"d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce\": container with ID starting with d255f289294b4e196570971389f8b9c2dd6c872e1cd7b3fc7bd9a7bf692025ce not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.343668 4745 scope.go:117] "RemoveContainer" containerID="37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.343906 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56"} err="failed to get container status \"37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\": rpc error: code = NotFound desc = could not find container \"37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56\": container with ID starting with 37bb25c8afbd7c6c7ab745b54d69e361e7703c6b86a48341b9ee607adc6b2a56 not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.343930 4745 scope.go:117] "RemoveContainer" containerID="2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.344133 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544"} err="failed to get container status \"2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\": rpc error: code = NotFound desc = could not find container \"2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544\": container with ID starting with 2c7b9d1889e0bb4705315deb3ac995959c5c9057a89c46afc259e5b1a41c8544 not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.344156 4745 scope.go:117] "RemoveContainer" containerID="a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.344394 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb"} err="failed to get container status \"a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\": rpc error: code = NotFound desc = could not find container \"a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb\": container with ID starting with a829436a425064956a4bcc37bb308a700438f359b5ff8375b563598a36b54deb not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.344417 4745 scope.go:117] "RemoveContainer" containerID="60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.344614 4745 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8"} err="failed to get container status \"60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8\": rpc error: code = NotFound desc = could not find container \"60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8\": container with ID starting with 60e104b2789e3766bf2806969e0c586c5179dc6a43b3ab0874464310f234dec8 not found: ID does not exist" Jan 27 12:25:00 crc kubenswrapper[4745]: I0127 12:25:00.455070 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:00 crc kubenswrapper[4745]: W0127 12:25:00.476336 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0124f904_af6e_4db5_bd26_4a30bd6d5db7.slice/crio-38841728144e99a89a9a97a947e8f1ec05753b75b2bfb9fe1dfacebace884672 WatchSource:0}: Error finding container 38841728144e99a89a9a97a947e8f1ec05753b75b2bfb9fe1dfacebace884672: Status 404 returned error can't find the container with id 38841728144e99a89a9a97a947e8f1ec05753b75b2bfb9fe1dfacebace884672 Jan 27 12:25:01 crc kubenswrapper[4745]: I0127 12:25:01.139414 4745 generic.go:334] "Generic (PLEG): container finished" podID="0124f904-af6e-4db5-bd26-4a30bd6d5db7" containerID="81592a7f5f76170ecf97ee4075c03b54e59aa7e68adb4f6f23e658d0531f902d" exitCode=0 Jan 27 12:25:01 crc kubenswrapper[4745]: I0127 12:25:01.139651 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-77dns" event={"ID":"0124f904-af6e-4db5-bd26-4a30bd6d5db7","Type":"ContainerDied","Data":"81592a7f5f76170ecf97ee4075c03b54e59aa7e68adb4f6f23e658d0531f902d"} Jan 27 12:25:01 crc kubenswrapper[4745]: I0127 12:25:01.139841 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-77dns" event={"ID":"0124f904-af6e-4db5-bd26-4a30bd6d5db7","Type":"ContainerStarted","Data":"38841728144e99a89a9a97a947e8f1ec05753b75b2bfb9fe1dfacebace884672"} Jan 27 12:25:01 crc kubenswrapper[4745]: I0127 12:25:01.142124 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-97hlh_c438e876-f4c1-42ca-b935-b5e58be9cfb2/kube-multus/2.log" Jan 27 12:25:01 crc kubenswrapper[4745]: I0127 12:25:01.142659 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-97hlh_c438e876-f4c1-42ca-b935-b5e58be9cfb2/kube-multus/1.log" Jan 27 12:25:01 crc kubenswrapper[4745]: I0127 12:25:01.142729 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-97hlh" event={"ID":"c438e876-f4c1-42ca-b935-b5e58be9cfb2","Type":"ContainerStarted","Data":"50948c466414400992cc3717cb8a8fdf2f2fe2432290c243ad6114e830571a6a"} Jan 27 12:25:02 crc kubenswrapper[4745]: I0127 12:25:02.085320 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26b1987b-69bb-4768-a874-5a97b3327469" path="/var/lib/kubelet/pods/26b1987b-69bb-4768-a874-5a97b3327469/volumes" Jan 27 12:25:03 crc kubenswrapper[4745]: I0127 12:25:03.160498 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-77dns" event={"ID":"0124f904-af6e-4db5-bd26-4a30bd6d5db7","Type":"ContainerStarted","Data":"7db3e49ce07e8b0fde8748ba060d8c99c79013ce596b8468715bd417114e9492"} Jan 27 12:25:03 crc kubenswrapper[4745]: I0127 12:25:03.160892 4745 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-77dns" event={"ID":"0124f904-af6e-4db5-bd26-4a30bd6d5db7","Type":"ContainerStarted","Data":"96e8a08e5e0420fd68651009ad6d9b5fb00b1a2ee07388e81f3809279424dd05"} Jan 27 12:25:03 crc kubenswrapper[4745]: I0127 12:25:03.160919 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-77dns" event={"ID":"0124f904-af6e-4db5-bd26-4a30bd6d5db7","Type":"ContainerStarted","Data":"6e7a031210b5770799b32dc3aa09d0ed052b262cd382e53b5ae8afa706c690e6"} Jan 27 12:25:04 crc kubenswrapper[4745]: I0127 12:25:04.169477 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-77dns" event={"ID":"0124f904-af6e-4db5-bd26-4a30bd6d5db7","Type":"ContainerStarted","Data":"e0f3e01ac9176cbc048c714f4dcde544f62fc40b87b3025b684ffdecfc4dedd4"} Jan 27 12:25:04 crc kubenswrapper[4745]: I0127 12:25:04.169531 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-77dns" event={"ID":"0124f904-af6e-4db5-bd26-4a30bd6d5db7","Type":"ContainerStarted","Data":"73cfc2efe0c82a39abc13acaf6bd5a0faae62dfe9965221527cd7417dcd31e9f"} Jan 27 12:25:04 crc kubenswrapper[4745]: I0127 12:25:04.169551 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-77dns" event={"ID":"0124f904-af6e-4db5-bd26-4a30bd6d5db7","Type":"ContainerStarted","Data":"c84fc5a15caab982a62724190a7c248ce3254a1c565ae05e2315fada3390be3e"} Jan 27 12:25:04 crc kubenswrapper[4745]: I0127 12:25:04.895384 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-jr2pj" Jan 27 12:25:06 crc kubenswrapper[4745]: I0127 12:25:06.185769 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-77dns" event={"ID":"0124f904-af6e-4db5-bd26-4a30bd6d5db7","Type":"ContainerStarted","Data":"24191286621fa47db3a28e9c962880ec904c8adbb47da0c029dd9ee37563c2d8"} Jan 27 12:25:09 crc kubenswrapper[4745]: I0127 12:25:09.206577 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-77dns" event={"ID":"0124f904-af6e-4db5-bd26-4a30bd6d5db7","Type":"ContainerStarted","Data":"efeffa2d80dad59bfaeff1e46d13e3fe49c41a19b164b5b2e74fa6018c03ab88"} Jan 27 12:25:09 crc kubenswrapper[4745]: I0127 12:25:09.207194 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:09 crc kubenswrapper[4745]: I0127 12:25:09.239743 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:09 crc kubenswrapper[4745]: I0127 12:25:09.245042 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-77dns" podStartSLOduration=9.242790965 podStartE2EDuration="9.242790965s" podCreationTimestamp="2026-01-27 12:25:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:25:09.238391015 +0000 UTC m=+802.043301723" watchObservedRunningTime="2026-01-27 12:25:09.242790965 +0000 UTC m=+802.047701653" Jan 27 12:25:10 crc kubenswrapper[4745]: I0127 12:25:10.212132 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:10 crc kubenswrapper[4745]: I0127 12:25:10.212171 4745 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:10 crc kubenswrapper[4745]: I0127 12:25:10.248582 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:30 crc kubenswrapper[4745]: I0127 12:25:30.480327 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-77dns" Jan 27 12:25:35 crc kubenswrapper[4745]: I0127 12:25:35.967169 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 12:25:35 crc kubenswrapper[4745]: I0127 12:25:35.967712 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 12:25:38 crc kubenswrapper[4745]: I0127 12:25:38.916580 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk"] Jan 27 12:25:38 crc kubenswrapper[4745]: I0127 12:25:38.918114 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk" Jan 27 12:25:38 crc kubenswrapper[4745]: I0127 12:25:38.919990 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 12:25:38 crc kubenswrapper[4745]: I0127 12:25:38.926617 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk"] Jan 27 12:25:39 crc kubenswrapper[4745]: I0127 12:25:39.092128 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b021ec5-0cae-448d-a9da-72a4f4e4ddf7-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk\" (UID: \"0b021ec5-0cae-448d-a9da-72a4f4e4ddf7\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk" Jan 27 12:25:39 crc kubenswrapper[4745]: I0127 12:25:39.092254 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b021ec5-0cae-448d-a9da-72a4f4e4ddf7-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk\" (UID: \"0b021ec5-0cae-448d-a9da-72a4f4e4ddf7\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk" Jan 27 12:25:39 crc kubenswrapper[4745]: I0127 12:25:39.092298 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqg6n\" (UniqueName: \"kubernetes.io/projected/0b021ec5-0cae-448d-a9da-72a4f4e4ddf7-kube-api-access-gqg6n\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk\" (UID: \"0b021ec5-0cae-448d-a9da-72a4f4e4ddf7\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk" Jan 27 12:25:39 crc kubenswrapper[4745]: I0127 12:25:39.193263 4745 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b021ec5-0cae-448d-a9da-72a4f4e4ddf7-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk\" (UID: \"0b021ec5-0cae-448d-a9da-72a4f4e4ddf7\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk" Jan 27 12:25:39 crc kubenswrapper[4745]: I0127 12:25:39.193375 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b021ec5-0cae-448d-a9da-72a4f4e4ddf7-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk\" (UID: \"0b021ec5-0cae-448d-a9da-72a4f4e4ddf7\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk" Jan 27 12:25:39 crc kubenswrapper[4745]: I0127 12:25:39.193420 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqg6n\" (UniqueName: \"kubernetes.io/projected/0b021ec5-0cae-448d-a9da-72a4f4e4ddf7-kube-api-access-gqg6n\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk\" (UID: \"0b021ec5-0cae-448d-a9da-72a4f4e4ddf7\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk" Jan 27 12:25:39 crc kubenswrapper[4745]: I0127 12:25:39.194155 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b021ec5-0cae-448d-a9da-72a4f4e4ddf7-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk\" (UID: \"0b021ec5-0cae-448d-a9da-72a4f4e4ddf7\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk" Jan 27 12:25:39 crc kubenswrapper[4745]: I0127 12:25:39.194292 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b021ec5-0cae-448d-a9da-72a4f4e4ddf7-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk\" (UID: \"0b021ec5-0cae-448d-a9da-72a4f4e4ddf7\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk" Jan 27 12:25:39 crc kubenswrapper[4745]: I0127 12:25:39.225070 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqg6n\" (UniqueName: \"kubernetes.io/projected/0b021ec5-0cae-448d-a9da-72a4f4e4ddf7-kube-api-access-gqg6n\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk\" (UID: \"0b021ec5-0cae-448d-a9da-72a4f4e4ddf7\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk" Jan 27 12:25:39 crc kubenswrapper[4745]: I0127 12:25:39.235229 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk" Jan 27 12:25:39 crc kubenswrapper[4745]: I0127 12:25:39.427996 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk"] Jan 27 12:25:40 crc kubenswrapper[4745]: I0127 12:25:40.378181 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk" event={"ID":"0b021ec5-0cae-448d-a9da-72a4f4e4ddf7","Type":"ContainerStarted","Data":"930aac40fec4a385c6e2d30b79c3b4fcbdfda4e4c095fb698d65abee52e8887a"} Jan 27 12:25:40 crc kubenswrapper[4745]: I0127 12:25:40.379452 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk" event={"ID":"0b021ec5-0cae-448d-a9da-72a4f4e4ddf7","Type":"ContainerStarted","Data":"33bc0f46d06e28e35ebcb444a182ab2e3a0129c8d1a9fe225da7a4a65103c522"} Jan 27 12:25:41 crc kubenswrapper[4745]: I0127 12:25:41.240897 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pfv8j"] Jan 27 12:25:41 crc kubenswrapper[4745]: I0127 12:25:41.242288 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pfv8j" Jan 27 12:25:41 crc kubenswrapper[4745]: I0127 12:25:41.251394 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pfv8j"] Jan 27 12:25:41 crc kubenswrapper[4745]: I0127 12:25:41.321125 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvjgc\" (UniqueName: \"kubernetes.io/projected/f573ce6f-7377-4608-8cee-357a1e7b066a-kube-api-access-xvjgc\") pod \"redhat-operators-pfv8j\" (UID: \"f573ce6f-7377-4608-8cee-357a1e7b066a\") " pod="openshift-marketplace/redhat-operators-pfv8j" Jan 27 12:25:41 crc kubenswrapper[4745]: I0127 12:25:41.321198 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f573ce6f-7377-4608-8cee-357a1e7b066a-utilities\") pod \"redhat-operators-pfv8j\" (UID: \"f573ce6f-7377-4608-8cee-357a1e7b066a\") " pod="openshift-marketplace/redhat-operators-pfv8j" Jan 27 12:25:41 crc kubenswrapper[4745]: I0127 12:25:41.321235 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f573ce6f-7377-4608-8cee-357a1e7b066a-catalog-content\") pod \"redhat-operators-pfv8j\" (UID: \"f573ce6f-7377-4608-8cee-357a1e7b066a\") " pod="openshift-marketplace/redhat-operators-pfv8j" Jan 27 12:25:41 crc kubenswrapper[4745]: I0127 12:25:41.387044 4745 generic.go:334] "Generic (PLEG): container finished" podID="0b021ec5-0cae-448d-a9da-72a4f4e4ddf7" containerID="930aac40fec4a385c6e2d30b79c3b4fcbdfda4e4c095fb698d65abee52e8887a" exitCode=0 Jan 27 12:25:41 crc kubenswrapper[4745]: I0127 12:25:41.387092 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk" event={"ID":"0b021ec5-0cae-448d-a9da-72a4f4e4ddf7","Type":"ContainerDied","Data":"930aac40fec4a385c6e2d30b79c3b4fcbdfda4e4c095fb698d65abee52e8887a"} Jan 27 12:25:41 crc kubenswrapper[4745]: I0127 12:25:41.422187 4745 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-xvjgc\" (UniqueName: \"kubernetes.io/projected/f573ce6f-7377-4608-8cee-357a1e7b066a-kube-api-access-xvjgc\") pod \"redhat-operators-pfv8j\" (UID: \"f573ce6f-7377-4608-8cee-357a1e7b066a\") " pod="openshift-marketplace/redhat-operators-pfv8j" Jan 27 12:25:41 crc kubenswrapper[4745]: I0127 12:25:41.422261 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f573ce6f-7377-4608-8cee-357a1e7b066a-utilities\") pod \"redhat-operators-pfv8j\" (UID: \"f573ce6f-7377-4608-8cee-357a1e7b066a\") " pod="openshift-marketplace/redhat-operators-pfv8j" Jan 27 12:25:41 crc kubenswrapper[4745]: I0127 12:25:41.422323 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f573ce6f-7377-4608-8cee-357a1e7b066a-catalog-content\") pod \"redhat-operators-pfv8j\" (UID: \"f573ce6f-7377-4608-8cee-357a1e7b066a\") " pod="openshift-marketplace/redhat-operators-pfv8j" Jan 27 12:25:41 crc kubenswrapper[4745]: I0127 12:25:41.422721 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f573ce6f-7377-4608-8cee-357a1e7b066a-catalog-content\") pod \"redhat-operators-pfv8j\" (UID: \"f573ce6f-7377-4608-8cee-357a1e7b066a\") " pod="openshift-marketplace/redhat-operators-pfv8j" Jan 27 12:25:41 crc kubenswrapper[4745]: I0127 12:25:41.422782 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f573ce6f-7377-4608-8cee-357a1e7b066a-utilities\") pod \"redhat-operators-pfv8j\" (UID: \"f573ce6f-7377-4608-8cee-357a1e7b066a\") " pod="openshift-marketplace/redhat-operators-pfv8j" Jan 27 12:25:41 crc kubenswrapper[4745]: I0127 12:25:41.443130 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvjgc\" (UniqueName: \"kubernetes.io/projected/f573ce6f-7377-4608-8cee-357a1e7b066a-kube-api-access-xvjgc\") pod \"redhat-operators-pfv8j\" (UID: \"f573ce6f-7377-4608-8cee-357a1e7b066a\") " pod="openshift-marketplace/redhat-operators-pfv8j" Jan 27 12:25:41 crc kubenswrapper[4745]: I0127 12:25:41.566217 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pfv8j" Jan 27 12:25:41 crc kubenswrapper[4745]: I0127 12:25:41.763581 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pfv8j"] Jan 27 12:25:42 crc kubenswrapper[4745]: I0127 12:25:42.393580 4745 generic.go:334] "Generic (PLEG): container finished" podID="f573ce6f-7377-4608-8cee-357a1e7b066a" containerID="5c52caa53719eb420556f482b28c81a26d02fb0bb84c7b50a447401a5fc56727" exitCode=0 Jan 27 12:25:42 crc kubenswrapper[4745]: I0127 12:25:42.393629 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfv8j" event={"ID":"f573ce6f-7377-4608-8cee-357a1e7b066a","Type":"ContainerDied","Data":"5c52caa53719eb420556f482b28c81a26d02fb0bb84c7b50a447401a5fc56727"} Jan 27 12:25:42 crc kubenswrapper[4745]: I0127 12:25:42.393657 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfv8j" event={"ID":"f573ce6f-7377-4608-8cee-357a1e7b066a","Type":"ContainerStarted","Data":"6a93f026ecb96d0c992579093d4085eb035525775005deb4ac5ff7987b5c4963"} Jan 27 12:25:45 crc kubenswrapper[4745]: I0127 12:25:45.411483 4745 generic.go:334] "Generic (PLEG): container finished" podID="f573ce6f-7377-4608-8cee-357a1e7b066a" containerID="e62a072aa9d1db29fd0d98ba34a6ef27be11b01d788fd0f1ccb79f487fef5572" exitCode=0 Jan 27 12:25:45 crc kubenswrapper[4745]: I0127 12:25:45.412016 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfv8j" event={"ID":"f573ce6f-7377-4608-8cee-357a1e7b066a","Type":"ContainerDied","Data":"e62a072aa9d1db29fd0d98ba34a6ef27be11b01d788fd0f1ccb79f487fef5572"} Jan 27 12:25:45 crc kubenswrapper[4745]: I0127 12:25:45.415602 4745 generic.go:334] "Generic (PLEG): container finished" podID="0b021ec5-0cae-448d-a9da-72a4f4e4ddf7" containerID="59d1b6dbd07e1dbcf2aff6f1378f7671691238bfc725a7b492dff1c44679e324" exitCode=0 Jan 27 12:25:45 crc kubenswrapper[4745]: I0127 12:25:45.415667 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk" event={"ID":"0b021ec5-0cae-448d-a9da-72a4f4e4ddf7","Type":"ContainerDied","Data":"59d1b6dbd07e1dbcf2aff6f1378f7671691238bfc725a7b492dff1c44679e324"} Jan 27 12:25:46 crc kubenswrapper[4745]: I0127 12:25:46.426449 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk" event={"ID":"0b021ec5-0cae-448d-a9da-72a4f4e4ddf7","Type":"ContainerStarted","Data":"daeaa4de790cad2d58faa4095e5ab17fffc4dcb3067d0093eea6f2be8d4dc061"} Jan 27 12:25:47 crc kubenswrapper[4745]: I0127 12:25:47.441066 4745 generic.go:334] "Generic (PLEG): container finished" podID="0b021ec5-0cae-448d-a9da-72a4f4e4ddf7" containerID="daeaa4de790cad2d58faa4095e5ab17fffc4dcb3067d0093eea6f2be8d4dc061" exitCode=0 Jan 27 12:25:47 crc kubenswrapper[4745]: I0127 12:25:47.441292 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk" event={"ID":"0b021ec5-0cae-448d-a9da-72a4f4e4ddf7","Type":"ContainerDied","Data":"daeaa4de790cad2d58faa4095e5ab17fffc4dcb3067d0093eea6f2be8d4dc061"} Jan 27 12:25:47 crc kubenswrapper[4745]: I0127 12:25:47.446948 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfv8j" 
event={"ID":"f573ce6f-7377-4608-8cee-357a1e7b066a","Type":"ContainerStarted","Data":"ec3ba7372c883c582b4930b7a5e0c24f808281d1ddbcdee170c0fdf45edf4585"} Jan 27 12:25:48 crc kubenswrapper[4745]: I0127 12:25:48.479758 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pfv8j" podStartSLOduration=2.785871601 podStartE2EDuration="7.479735135s" podCreationTimestamp="2026-01-27 12:25:41 +0000 UTC" firstStartedPulling="2026-01-27 12:25:42.394833149 +0000 UTC m=+835.199743837" lastFinishedPulling="2026-01-27 12:25:47.088696643 +0000 UTC m=+839.893607371" observedRunningTime="2026-01-27 12:25:48.474979898 +0000 UTC m=+841.279890586" watchObservedRunningTime="2026-01-27 12:25:48.479735135 +0000 UTC m=+841.284645823" Jan 27 12:25:48 crc kubenswrapper[4745]: I0127 12:25:48.703950 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk" Jan 27 12:25:48 crc kubenswrapper[4745]: I0127 12:25:48.727770 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqg6n\" (UniqueName: \"kubernetes.io/projected/0b021ec5-0cae-448d-a9da-72a4f4e4ddf7-kube-api-access-gqg6n\") pod \"0b021ec5-0cae-448d-a9da-72a4f4e4ddf7\" (UID: \"0b021ec5-0cae-448d-a9da-72a4f4e4ddf7\") " Jan 27 12:25:48 crc kubenswrapper[4745]: I0127 12:25:48.727872 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b021ec5-0cae-448d-a9da-72a4f4e4ddf7-util\") pod \"0b021ec5-0cae-448d-a9da-72a4f4e4ddf7\" (UID: \"0b021ec5-0cae-448d-a9da-72a4f4e4ddf7\") " Jan 27 12:25:48 crc kubenswrapper[4745]: I0127 12:25:48.727977 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b021ec5-0cae-448d-a9da-72a4f4e4ddf7-bundle\") pod \"0b021ec5-0cae-448d-a9da-72a4f4e4ddf7\" (UID: \"0b021ec5-0cae-448d-a9da-72a4f4e4ddf7\") " Jan 27 12:25:48 crc kubenswrapper[4745]: I0127 12:25:48.733026 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b021ec5-0cae-448d-a9da-72a4f4e4ddf7-bundle" (OuterVolumeSpecName: "bundle") pod "0b021ec5-0cae-448d-a9da-72a4f4e4ddf7" (UID: "0b021ec5-0cae-448d-a9da-72a4f4e4ddf7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:25:48 crc kubenswrapper[4745]: I0127 12:25:48.736046 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b021ec5-0cae-448d-a9da-72a4f4e4ddf7-kube-api-access-gqg6n" (OuterVolumeSpecName: "kube-api-access-gqg6n") pod "0b021ec5-0cae-448d-a9da-72a4f4e4ddf7" (UID: "0b021ec5-0cae-448d-a9da-72a4f4e4ddf7"). InnerVolumeSpecName "kube-api-access-gqg6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:25:48 crc kubenswrapper[4745]: I0127 12:25:48.741309 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b021ec5-0cae-448d-a9da-72a4f4e4ddf7-util" (OuterVolumeSpecName: "util") pod "0b021ec5-0cae-448d-a9da-72a4f4e4ddf7" (UID: "0b021ec5-0cae-448d-a9da-72a4f4e4ddf7"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:25:48 crc kubenswrapper[4745]: I0127 12:25:48.830185 4745 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b021ec5-0cae-448d-a9da-72a4f4e4ddf7-util\") on node \"crc\" DevicePath \"\"" Jan 27 12:25:48 crc kubenswrapper[4745]: I0127 12:25:48.830275 4745 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b021ec5-0cae-448d-a9da-72a4f4e4ddf7-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 12:25:48 crc kubenswrapper[4745]: I0127 12:25:48.830288 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqg6n\" (UniqueName: \"kubernetes.io/projected/0b021ec5-0cae-448d-a9da-72a4f4e4ddf7-kube-api-access-gqg6n\") on node \"crc\" DevicePath \"\"" Jan 27 12:25:49 crc kubenswrapper[4745]: I0127 12:25:49.462481 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk" event={"ID":"0b021ec5-0cae-448d-a9da-72a4f4e4ddf7","Type":"ContainerDied","Data":"33bc0f46d06e28e35ebcb444a182ab2e3a0129c8d1a9fe225da7a4a65103c522"} Jan 27 12:25:49 crc kubenswrapper[4745]: I0127 12:25:49.462548 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33bc0f46d06e28e35ebcb444a182ab2e3a0129c8d1a9fe225da7a4a65103c522" Jan 27 12:25:49 crc kubenswrapper[4745]: I0127 12:25:49.462528 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk" Jan 27 12:25:51 crc kubenswrapper[4745]: I0127 12:25:51.567127 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pfv8j" Jan 27 12:25:51 crc kubenswrapper[4745]: I0127 12:25:51.567298 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pfv8j" Jan 27 12:25:52 crc kubenswrapper[4745]: I0127 12:25:52.609631 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pfv8j" podUID="f573ce6f-7377-4608-8cee-357a1e7b066a" containerName="registry-server" probeResult="failure" output=< Jan 27 12:25:52 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 27 12:25:52 crc kubenswrapper[4745]: > Jan 27 12:25:55 crc kubenswrapper[4745]: I0127 12:25:55.753465 4745 scope.go:117] "RemoveContainer" containerID="baa0c592826e1a106ac51a6142fa5d252f837bafe03f4cdc94832dbb4704e9f3" Jan 27 12:25:56 crc kubenswrapper[4745]: I0127 12:25:56.510606 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-97hlh_c438e876-f4c1-42ca-b935-b5e58be9cfb2/kube-multus/2.log" Jan 27 12:26:02 crc kubenswrapper[4745]: I0127 12:26:02.000244 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pfv8j" Jan 27 12:26:02 crc kubenswrapper[4745]: I0127 12:26:02.384496 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pfv8j" Jan 27 12:26:03 crc kubenswrapper[4745]: I0127 12:26:03.048757 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pfv8j"] Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.031911 4745 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-pfv8j" podUID="f573ce6f-7377-4608-8cee-357a1e7b066a" containerName="registry-server" containerID="cri-o://ec3ba7372c883c582b4930b7a5e0c24f808281d1ddbcdee170c0fdf45edf4585" gracePeriod=2 Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.543895 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-lvn25"] Jan 27 12:26:04 crc kubenswrapper[4745]: E0127 12:26:04.544206 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b021ec5-0cae-448d-a9da-72a4f4e4ddf7" containerName="extract" Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.544226 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b021ec5-0cae-448d-a9da-72a4f4e4ddf7" containerName="extract" Jan 27 12:26:04 crc kubenswrapper[4745]: E0127 12:26:04.544246 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b021ec5-0cae-448d-a9da-72a4f4e4ddf7" containerName="util" Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.544260 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b021ec5-0cae-448d-a9da-72a4f4e4ddf7" containerName="util" Jan 27 12:26:04 crc kubenswrapper[4745]: E0127 12:26:04.544271 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b021ec5-0cae-448d-a9da-72a4f4e4ddf7" containerName="pull" Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.544280 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b021ec5-0cae-448d-a9da-72a4f4e4ddf7" containerName="pull" Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.544409 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b021ec5-0cae-448d-a9da-72a4f4e4ddf7" containerName="extract" Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.545036 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lvn25" Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.547702 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-x8j5v" Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.548058 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.548254 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.633789 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-lvn25"] Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.680336 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-84d7487956-qb7l2"] Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.685614 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d7487956-qb7l2" Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.688535 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-8ngjt" Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.688711 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.716669 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-84d7487956-nr9lp"] Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.717114 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cc6dbeab-7b0b-4924-a7d9-b1a27b0740cd-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-84d7487956-qb7l2\" (UID: \"cc6dbeab-7b0b-4924-a7d9-b1a27b0740cd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d7487956-qb7l2" Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.717187 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkbzp\" (UniqueName: \"kubernetes.io/projected/34e4a875-3a3e-43ea-9092-887c194579c5-kube-api-access-xkbzp\") pod \"obo-prometheus-operator-68bc856cb9-lvn25\" (UID: \"34e4a875-3a3e-43ea-9092-887c194579c5\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lvn25" Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.717238 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cc6dbeab-7b0b-4924-a7d9-b1a27b0740cd-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-84d7487956-qb7l2\" (UID: \"cc6dbeab-7b0b-4924-a7d9-b1a27b0740cd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d7487956-qb7l2" Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.739334 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-84d7487956-qb7l2"] Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.739379 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-84d7487956-nr9lp"] Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.739582 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d7487956-nr9lp" Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.841324 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cc6dbeab-7b0b-4924-a7d9-b1a27b0740cd-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-84d7487956-qb7l2\" (UID: \"cc6dbeab-7b0b-4924-a7d9-b1a27b0740cd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d7487956-qb7l2" Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.841428 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a76fb56b-8fc4-48d9-a356-b1e369938f0f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-84d7487956-nr9lp\" (UID: \"a76fb56b-8fc4-48d9-a356-b1e369938f0f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d7487956-nr9lp" Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.841473 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a76fb56b-8fc4-48d9-a356-b1e369938f0f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-84d7487956-nr9lp\" (UID: \"a76fb56b-8fc4-48d9-a356-b1e369938f0f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d7487956-nr9lp" Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.841511 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkbzp\" (UniqueName: \"kubernetes.io/projected/34e4a875-3a3e-43ea-9092-887c194579c5-kube-api-access-xkbzp\") pod \"obo-prometheus-operator-68bc856cb9-lvn25\" (UID: \"34e4a875-3a3e-43ea-9092-887c194579c5\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lvn25" Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.841697 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cc6dbeab-7b0b-4924-a7d9-b1a27b0740cd-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-84d7487956-qb7l2\" (UID: \"cc6dbeab-7b0b-4924-a7d9-b1a27b0740cd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d7487956-qb7l2" Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.942789 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a76fb56b-8fc4-48d9-a356-b1e369938f0f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-84d7487956-nr9lp\" (UID: \"a76fb56b-8fc4-48d9-a356-b1e369938f0f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d7487956-nr9lp" Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.942880 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a76fb56b-8fc4-48d9-a356-b1e369938f0f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-84d7487956-nr9lp\" (UID: \"a76fb56b-8fc4-48d9-a356-b1e369938f0f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d7487956-nr9lp" Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.949975 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/a76fb56b-8fc4-48d9-a356-b1e369938f0f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-84d7487956-nr9lp\" (UID: \"a76fb56b-8fc4-48d9-a356-b1e369938f0f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d7487956-nr9lp" Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.957216 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cc6dbeab-7b0b-4924-a7d9-b1a27b0740cd-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-84d7487956-qb7l2\" (UID: \"cc6dbeab-7b0b-4924-a7d9-b1a27b0740cd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d7487956-qb7l2" Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.964390 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cc6dbeab-7b0b-4924-a7d9-b1a27b0740cd-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-84d7487956-qb7l2\" (UID: \"cc6dbeab-7b0b-4924-a7d9-b1a27b0740cd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d7487956-qb7l2" Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.964667 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a76fb56b-8fc4-48d9-a356-b1e369938f0f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-84d7487956-nr9lp\" (UID: \"a76fb56b-8fc4-48d9-a356-b1e369938f0f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d7487956-nr9lp" Jan 27 12:26:04 crc kubenswrapper[4745]: I0127 12:26:04.979266 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkbzp\" (UniqueName: \"kubernetes.io/projected/34e4a875-3a3e-43ea-9092-887c194579c5-kube-api-access-xkbzp\") pod \"obo-prometheus-operator-68bc856cb9-lvn25\" (UID: \"34e4a875-3a3e-43ea-9092-887c194579c5\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lvn25" Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.000899 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-78f4x"] Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.001786 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-78f4x" Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.003887 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.004117 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-pv7rb" Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.018489 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-78f4x"] Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.035031 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d7487956-qb7l2" Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.056565 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2fc8aa52-b047-4344-b175-e5b58f406459-observability-operator-tls\") pod \"observability-operator-59bdc8b94-78f4x\" (UID: \"2fc8aa52-b047-4344-b175-e5b58f406459\") " pod="openshift-operators/observability-operator-59bdc8b94-78f4x" Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.056765 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55b2r\" (UniqueName: \"kubernetes.io/projected/2fc8aa52-b047-4344-b175-e5b58f406459-kube-api-access-55b2r\") pod \"observability-operator-59bdc8b94-78f4x\" (UID: \"2fc8aa52-b047-4344-b175-e5b58f406459\") " pod="openshift-operators/observability-operator-59bdc8b94-78f4x" Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.060222 4745 generic.go:334] "Generic (PLEG): container finished" podID="f573ce6f-7377-4608-8cee-357a1e7b066a" containerID="ec3ba7372c883c582b4930b7a5e0c24f808281d1ddbcdee170c0fdf45edf4585" exitCode=0 Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.060258 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfv8j" event={"ID":"f573ce6f-7377-4608-8cee-357a1e7b066a","Type":"ContainerDied","Data":"ec3ba7372c883c582b4930b7a5e0c24f808281d1ddbcdee170c0fdf45edf4585"} Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.065575 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d7487956-nr9lp" Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.158009 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2fc8aa52-b047-4344-b175-e5b58f406459-observability-operator-tls\") pod \"observability-operator-59bdc8b94-78f4x\" (UID: \"2fc8aa52-b047-4344-b175-e5b58f406459\") " pod="openshift-operators/observability-operator-59bdc8b94-78f4x" Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.159016 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55b2r\" (UniqueName: \"kubernetes.io/projected/2fc8aa52-b047-4344-b175-e5b58f406459-kube-api-access-55b2r\") pod \"observability-operator-59bdc8b94-78f4x\" (UID: \"2fc8aa52-b047-4344-b175-e5b58f406459\") " pod="openshift-operators/observability-operator-59bdc8b94-78f4x" Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.162690 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2fc8aa52-b047-4344-b175-e5b58f406459-observability-operator-tls\") pod \"observability-operator-59bdc8b94-78f4x\" (UID: \"2fc8aa52-b047-4344-b175-e5b58f406459\") " pod="openshift-operators/observability-operator-59bdc8b94-78f4x" Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.164526 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lvn25" Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.176301 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55b2r\" (UniqueName: \"kubernetes.io/projected/2fc8aa52-b047-4344-b175-e5b58f406459-kube-api-access-55b2r\") pod \"observability-operator-59bdc8b94-78f4x\" (UID: \"2fc8aa52-b047-4344-b175-e5b58f406459\") " pod="openshift-operators/observability-operator-59bdc8b94-78f4x" Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.207595 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-wzghm"] Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.208303 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-wzghm" Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.217514 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-l58qq" Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.217777 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-wzghm"] Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.262662 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hmzb\" (UniqueName: \"kubernetes.io/projected/7d8f40f3-b76c-4b87-8b96-dbb564ce0b8d-kube-api-access-2hmzb\") pod \"perses-operator-5bf474d74f-wzghm\" (UID: \"7d8f40f3-b76c-4b87-8b96-dbb564ce0b8d\") " pod="openshift-operators/perses-operator-5bf474d74f-wzghm" Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.262725 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7d8f40f3-b76c-4b87-8b96-dbb564ce0b8d-openshift-service-ca\") pod \"perses-operator-5bf474d74f-wzghm\" (UID: \"7d8f40f3-b76c-4b87-8b96-dbb564ce0b8d\") " pod="openshift-operators/perses-operator-5bf474d74f-wzghm" Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.327558 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-78f4x" Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.364883 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7d8f40f3-b76c-4b87-8b96-dbb564ce0b8d-openshift-service-ca\") pod \"perses-operator-5bf474d74f-wzghm\" (UID: \"7d8f40f3-b76c-4b87-8b96-dbb564ce0b8d\") " pod="openshift-operators/perses-operator-5bf474d74f-wzghm" Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.365111 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hmzb\" (UniqueName: \"kubernetes.io/projected/7d8f40f3-b76c-4b87-8b96-dbb564ce0b8d-kube-api-access-2hmzb\") pod \"perses-operator-5bf474d74f-wzghm\" (UID: \"7d8f40f3-b76c-4b87-8b96-dbb564ce0b8d\") " pod="openshift-operators/perses-operator-5bf474d74f-wzghm" Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.366638 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7d8f40f3-b76c-4b87-8b96-dbb564ce0b8d-openshift-service-ca\") pod \"perses-operator-5bf474d74f-wzghm\" (UID: \"7d8f40f3-b76c-4b87-8b96-dbb564ce0b8d\") " pod="openshift-operators/perses-operator-5bf474d74f-wzghm" Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.393053 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hmzb\" (UniqueName: \"kubernetes.io/projected/7d8f40f3-b76c-4b87-8b96-dbb564ce0b8d-kube-api-access-2hmzb\") pod \"perses-operator-5bf474d74f-wzghm\" (UID: \"7d8f40f3-b76c-4b87-8b96-dbb564ce0b8d\") " pod="openshift-operators/perses-operator-5bf474d74f-wzghm" Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.572139 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-wzghm" Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.978206 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.978802 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 12:26:05 crc kubenswrapper[4745]: I0127 12:26:05.978244 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pfv8j" Jan 27 12:26:06 crc kubenswrapper[4745]: I0127 12:26:06.070898 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfv8j" event={"ID":"f573ce6f-7377-4608-8cee-357a1e7b066a","Type":"ContainerDied","Data":"6a93f026ecb96d0c992579093d4085eb035525775005deb4ac5ff7987b5c4963"} Jan 27 12:26:06 crc kubenswrapper[4745]: I0127 12:26:06.071278 4745 scope.go:117] "RemoveContainer" containerID="ec3ba7372c883c582b4930b7a5e0c24f808281d1ddbcdee170c0fdf45edf4585" Jan 27 12:26:06 crc kubenswrapper[4745]: I0127 12:26:06.070963 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pfv8j" Jan 27 12:26:06 crc kubenswrapper[4745]: I0127 12:26:06.088511 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvjgc\" (UniqueName: \"kubernetes.io/projected/f573ce6f-7377-4608-8cee-357a1e7b066a-kube-api-access-xvjgc\") pod \"f573ce6f-7377-4608-8cee-357a1e7b066a\" (UID: \"f573ce6f-7377-4608-8cee-357a1e7b066a\") " Jan 27 12:26:06 crc kubenswrapper[4745]: I0127 12:26:06.088619 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f573ce6f-7377-4608-8cee-357a1e7b066a-utilities\") pod \"f573ce6f-7377-4608-8cee-357a1e7b066a\" (UID: \"f573ce6f-7377-4608-8cee-357a1e7b066a\") " Jan 27 12:26:06 crc kubenswrapper[4745]: I0127 12:26:06.088679 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f573ce6f-7377-4608-8cee-357a1e7b066a-catalog-content\") pod \"f573ce6f-7377-4608-8cee-357a1e7b066a\" (UID: \"f573ce6f-7377-4608-8cee-357a1e7b066a\") " Jan 27 12:26:06 crc kubenswrapper[4745]: I0127 12:26:06.090857 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f573ce6f-7377-4608-8cee-357a1e7b066a-utilities" (OuterVolumeSpecName: "utilities") pod "f573ce6f-7377-4608-8cee-357a1e7b066a" (UID: "f573ce6f-7377-4608-8cee-357a1e7b066a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:26:06 crc kubenswrapper[4745]: I0127 12:26:06.127937 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f573ce6f-7377-4608-8cee-357a1e7b066a-kube-api-access-xvjgc" (OuterVolumeSpecName: "kube-api-access-xvjgc") pod "f573ce6f-7377-4608-8cee-357a1e7b066a" (UID: "f573ce6f-7377-4608-8cee-357a1e7b066a"). InnerVolumeSpecName "kube-api-access-xvjgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:26:06 crc kubenswrapper[4745]: I0127 12:26:06.134483 4745 scope.go:117] "RemoveContainer" containerID="e62a072aa9d1db29fd0d98ba34a6ef27be11b01d788fd0f1ccb79f487fef5572" Jan 27 12:26:06 crc kubenswrapper[4745]: I0127 12:26:06.181382 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-84d7487956-qb7l2"] Jan 27 12:26:06 crc kubenswrapper[4745]: I0127 12:26:06.191965 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvjgc\" (UniqueName: \"kubernetes.io/projected/f573ce6f-7377-4608-8cee-357a1e7b066a-kube-api-access-xvjgc\") on node \"crc\" DevicePath \"\"" Jan 27 12:26:06 crc kubenswrapper[4745]: I0127 12:26:06.192086 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f573ce6f-7377-4608-8cee-357a1e7b066a-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 12:26:06 crc kubenswrapper[4745]: I0127 12:26:06.251975 4745 scope.go:117] "RemoveContainer" containerID="5c52caa53719eb420556f482b28c81a26d02fb0bb84c7b50a447401a5fc56727" Jan 27 12:26:06 crc kubenswrapper[4745]: I0127 12:26:06.267250 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f573ce6f-7377-4608-8cee-357a1e7b066a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f573ce6f-7377-4608-8cee-357a1e7b066a" (UID: "f573ce6f-7377-4608-8cee-357a1e7b066a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:26:06 crc kubenswrapper[4745]: W0127 12:26:06.277242 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc6dbeab_7b0b_4924_a7d9_b1a27b0740cd.slice/crio-45291436e5c25c845461fa6bb3931e65194c4ebfc9e24d1c087ded9fec29d8d9 WatchSource:0}: Error finding container 45291436e5c25c845461fa6bb3931e65194c4ebfc9e24d1c087ded9fec29d8d9: Status 404 returned error can't find the container with id 45291436e5c25c845461fa6bb3931e65194c4ebfc9e24d1c087ded9fec29d8d9 Jan 27 12:26:06 crc kubenswrapper[4745]: I0127 12:26:06.294592 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f573ce6f-7377-4608-8cee-357a1e7b066a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 12:26:06 crc kubenswrapper[4745]: I0127 12:26:06.403446 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pfv8j"] Jan 27 12:26:06 crc kubenswrapper[4745]: I0127 12:26:06.406785 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pfv8j"] Jan 27 12:26:06 crc kubenswrapper[4745]: I0127 12:26:06.445633 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-lvn25"] Jan 27 12:26:06 crc kubenswrapper[4745]: W0127 12:26:06.466956 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34e4a875_3a3e_43ea_9092_887c194579c5.slice/crio-9d918095978d5341c97236f1d56e61497c7194fbf708cb4cdedc30ead2a8825b WatchSource:0}: Error finding container 9d918095978d5341c97236f1d56e61497c7194fbf708cb4cdedc30ead2a8825b: Status 404 returned error can't find the container with id 9d918095978d5341c97236f1d56e61497c7194fbf708cb4cdedc30ead2a8825b Jan 27 12:26:06 crc kubenswrapper[4745]: I0127 12:26:06.541132 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-84d7487956-nr9lp"] Jan 27 12:26:06 crc kubenswrapper[4745]: I0127 12:26:06.785194 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-wzghm"] Jan 27 12:26:06 crc kubenswrapper[4745]: I0127 12:26:06.797515 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-78f4x"] Jan 27 12:26:06 crc kubenswrapper[4745]: W0127 12:26:06.798236 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d8f40f3_b76c_4b87_8b96_dbb564ce0b8d.slice/crio-62d14e8b1663a575bd64296c63e823790dc687f731a902ed0e48dd924777f20e WatchSource:0}: Error finding container 62d14e8b1663a575bd64296c63e823790dc687f731a902ed0e48dd924777f20e: Status 404 returned error can't find the container with id 62d14e8b1663a575bd64296c63e823790dc687f731a902ed0e48dd924777f20e Jan 27 12:26:06 crc kubenswrapper[4745]: W0127 12:26:06.811145 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fc8aa52_b047_4344_b175_e5b58f406459.slice/crio-a960888433cbc36bc3fc22afaacb6bed23165c19b7ca08368bb8fb10acd6a791 WatchSource:0}: Error finding container a960888433cbc36bc3fc22afaacb6bed23165c19b7ca08368bb8fb10acd6a791: Status 404 returned error can't find the container with id 
a960888433cbc36bc3fc22afaacb6bed23165c19b7ca08368bb8fb10acd6a791 Jan 27 12:26:07 crc kubenswrapper[4745]: I0127 12:26:07.083179 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-78f4x" event={"ID":"2fc8aa52-b047-4344-b175-e5b58f406459","Type":"ContainerStarted","Data":"a960888433cbc36bc3fc22afaacb6bed23165c19b7ca08368bb8fb10acd6a791"} Jan 27 12:26:07 crc kubenswrapper[4745]: I0127 12:26:07.088055 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lvn25" event={"ID":"34e4a875-3a3e-43ea-9092-887c194579c5","Type":"ContainerStarted","Data":"9d918095978d5341c97236f1d56e61497c7194fbf708cb4cdedc30ead2a8825b"} Jan 27 12:26:07 crc kubenswrapper[4745]: I0127 12:26:07.088979 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-wzghm" event={"ID":"7d8f40f3-b76c-4b87-8b96-dbb564ce0b8d","Type":"ContainerStarted","Data":"62d14e8b1663a575bd64296c63e823790dc687f731a902ed0e48dd924777f20e"} Jan 27 12:26:07 crc kubenswrapper[4745]: I0127 12:26:07.090905 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d7487956-nr9lp" event={"ID":"a76fb56b-8fc4-48d9-a356-b1e369938f0f","Type":"ContainerStarted","Data":"bb16ea9d5e5256f93e6da58a788f8fe814ba42f5ba72d506a6d0f9a2523d245c"} Jan 27 12:26:07 crc kubenswrapper[4745]: I0127 12:26:07.091869 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d7487956-qb7l2" event={"ID":"cc6dbeab-7b0b-4924-a7d9-b1a27b0740cd","Type":"ContainerStarted","Data":"45291436e5c25c845461fa6bb3931e65194c4ebfc9e24d1c087ded9fec29d8d9"} Jan 27 12:26:08 crc kubenswrapper[4745]: I0127 12:26:08.084306 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f573ce6f-7377-4608-8cee-357a1e7b066a" path="/var/lib/kubelet/pods/f573ce6f-7377-4608-8cee-357a1e7b066a/volumes" Jan 27 12:26:26 crc kubenswrapper[4745]: E0127 12:26:26.657187 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" Jan 27 12:26:26 crc kubenswrapper[4745]: E0127 12:26:26.657874 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a,Command:[],Args:[--prometheus-config-reloader=$(RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER) --prometheus-instance-selector=app.kubernetes.io/managed-by=observability-operator --alertmanager-instance-selector=app.kubernetes.io/managed-by=observability-operator --thanos-ruler-instance-selector=app.kubernetes.io/managed-by=observability-operator --watch-referenced-objects-in-all-namespaces=true 
--disable-unmanaged-prometheus-configuration=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOGC,Value:30,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER,Value:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{157286400 0} {} 150Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xkbzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-68bc856cb9-lvn25_openshift-operators(34e4a875-3a3e-43ea-9092-887c194579c5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 12:26:26 crc kubenswrapper[4745]: E0127 12:26:26.658965 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lvn25" podUID="34e4a875-3a3e-43ea-9092-887c194579c5" Jan 27 12:26:27 crc kubenswrapper[4745]: E0127 12:26:27.827784 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lvn25" podUID="34e4a875-3a3e-43ea-9092-887c194579c5" Jan 27 12:26:28 crc kubenswrapper[4745]: I0127 12:26:28.472073 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d7487956-qb7l2" event={"ID":"cc6dbeab-7b0b-4924-a7d9-b1a27b0740cd","Type":"ContainerStarted","Data":"19eab3860e714899a8a230f93993c03ff58f3b8bdf5b731a6df6ff60c1e0f4fe"} Jan 27 12:26:28 crc kubenswrapper[4745]: I0127 12:26:28.475531 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-78f4x" event={"ID":"2fc8aa52-b047-4344-b175-e5b58f406459","Type":"ContainerStarted","Data":"e6a7254da306a93ea92bce8fd4fd3f2fbce1b053dc869b80641123ed56f7d025"} 
Jan 27 12:26:28 crc kubenswrapper[4745]: I0127 12:26:28.472073 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d7487956-qb7l2" event={"ID":"cc6dbeab-7b0b-4924-a7d9-b1a27b0740cd","Type":"ContainerStarted","Data":"19eab3860e714899a8a230f93993c03ff58f3b8bdf5b731a6df6ff60c1e0f4fe"}
Jan 27 12:26:28 crc kubenswrapper[4745]: I0127 12:26:28.475531 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-78f4x" event={"ID":"2fc8aa52-b047-4344-b175-e5b58f406459","Type":"ContainerStarted","Data":"e6a7254da306a93ea92bce8fd4fd3f2fbce1b053dc869b80641123ed56f7d025"}
Jan 27 12:26:28 crc kubenswrapper[4745]: I0127 12:26:28.476337 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-78f4x"
Jan 27 12:26:28 crc kubenswrapper[4745]: I0127 12:26:28.477418 4745 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-78f4x container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.42:8081/healthz\": dial tcp 10.217.0.42:8081: connect: connection refused" start-of-body=
Jan 27 12:26:28 crc kubenswrapper[4745]: I0127 12:26:28.477448 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-78f4x" podUID="2fc8aa52-b047-4344-b175-e5b58f406459" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.42:8081/healthz\": dial tcp 10.217.0.42:8081: connect: connection refused"
Jan 27 12:26:28 crc kubenswrapper[4745]: I0127 12:26:28.480105 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-wzghm" event={"ID":"7d8f40f3-b76c-4b87-8b96-dbb564ce0b8d","Type":"ContainerStarted","Data":"402d563625d817d6f11ae141f063d3f4b48b54f6a4f8fa02324eaddb763ef6bc"}
Jan 27 12:26:28 crc kubenswrapper[4745]: I0127 12:26:28.480667 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-wzghm"
Jan 27 12:26:28 crc kubenswrapper[4745]: I0127 12:26:28.482506 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d7487956-nr9lp" event={"ID":"a76fb56b-8fc4-48d9-a356-b1e369938f0f","Type":"ContainerStarted","Data":"83c4164bd6f819a387735a772d7c9a1b7a032685f03b11890299ce2cdd132066"}
Jan 27 12:26:28 crc kubenswrapper[4745]: I0127 12:26:28.495669 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d7487956-qb7l2" podStartSLOduration=2.8248047659999997 podStartE2EDuration="24.495651811s" podCreationTimestamp="2026-01-27 12:26:04 +0000 UTC" firstStartedPulling="2026-01-27 12:26:06.310512389 +0000 UTC m=+859.115423077" lastFinishedPulling="2026-01-27 12:26:27.981359444 +0000 UTC m=+880.786270122" observedRunningTime="2026-01-27 12:26:28.493674974 +0000 UTC m=+881.298585672" watchObservedRunningTime="2026-01-27 12:26:28.495651811 +0000 UTC m=+881.300562499"
Jan 27 12:26:28 crc kubenswrapper[4745]: I0127 12:26:28.526539 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84d7487956-nr9lp" podStartSLOduration=3.098644001 podStartE2EDuration="24.526521151s" podCreationTimestamp="2026-01-27 12:26:04 +0000 UTC" firstStartedPulling="2026-01-27 12:26:06.551704293 +0000 UTC m=+859.356614981" lastFinishedPulling="2026-01-27 12:26:27.979581433 +0000 UTC m=+880.784492131" observedRunningTime="2026-01-27 12:26:28.52404287 +0000 UTC m=+881.328953558" watchObservedRunningTime="2026-01-27 12:26:28.526521151 +0000 UTC m=+881.331431839"
Jan 27 12:26:28 crc kubenswrapper[4745]: I0127 12:26:28.552757 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-78f4x" podStartSLOduration=3.32304049 podStartE2EDuration="24.552741677s" podCreationTimestamp="2026-01-27 12:26:04 +0000 UTC" firstStartedPulling="2026-01-27 12:26:06.813963614 +0000 UTC m=+859.618874302" lastFinishedPulling="2026-01-27 12:26:28.043664801 +0000 UTC m=+880.848575489" observedRunningTime="2026-01-27 12:26:28.547330611 +0000 UTC m=+881.352241299" watchObservedRunningTime="2026-01-27 12:26:28.552741677 +0000 UTC m=+881.357652365"
Jan 27 12:26:28 crc kubenswrapper[4745]: I0127 12:26:28.569204 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-wzghm" podStartSLOduration=2.395616043 podStartE2EDuration="23.569180621s" podCreationTimestamp="2026-01-27 12:26:05 +0000 UTC" firstStartedPulling="2026-01-27 12:26:06.807061545 +0000 UTC m=+859.611972243" lastFinishedPulling="2026-01-27 12:26:27.980626133 +0000 UTC m=+880.785536821" observedRunningTime="2026-01-27 12:26:28.563116156 +0000 UTC m=+881.368026844" watchObservedRunningTime="2026-01-27 12:26:28.569180621 +0000 UTC m=+881.374091309"
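The pod_startup_latency_tracker records above encode a relationship you can verify from the log itself: podStartSLOduration is the end-to-end startup duration with the image-pull window subtracted (the startup SLI definition excludes pull time). For the qb7l2 webhook pod, 24.495651811s minus (12:26:27.981359444 − 12:26:06.310512389 = 21.670847055s of pulling) gives 2.824804756s, agreeing with the logged 2.8248047659999997 up to float64 rounding. The "m=+881.3…" suffixes are Go's monotonic-clock readings (seconds since process start) that time.Time's formatter appends. A small Go check of the arithmetic:

```go
package main

import (
	"fmt"
	"time"
)

// Verifies, from the timestamps in the qb7l2 record above, that
// podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling).
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-27 12:26:04 +0000 UTC")
	firstPull := parse("2026-01-27 12:26:06.310512389 +0000 UTC")
	lastPull := parse("2026-01-27 12:26:27.981359444 +0000 UTC")
	running := parse("2026-01-27 12:26:28.495651811 +0000 UTC")

	e2e := running.Sub(created)          // 24.495651811s (podStartE2EDuration)
	slo := e2e - lastPull.Sub(firstPull) // 2.824804756s ≈ podStartSLOduration
	fmt.Println(e2e, slo)
}
```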
event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerDied","Data":"0a05fb56de3f4f4964f4b329a07b5860a6f3e32e5425eaf7e81fdbe26e1e74c6"} Jan 27 12:26:37 crc kubenswrapper[4745]: I0127 12:26:37.558786 4745 scope.go:117] "RemoveContainer" containerID="1d165911e4b0e3c594550d27c4ba050dc9d70726bdb582f151b9ce410a17826f" Jan 27 12:26:38 crc kubenswrapper[4745]: I0127 12:26:38.567290 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerStarted","Data":"4bf427099ba09136d50759e57a90c739bd38ee9b8bfd72165c113357c32a5692"} Jan 27 12:26:47 crc kubenswrapper[4745]: I0127 12:26:47.623894 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lvn25" event={"ID":"34e4a875-3a3e-43ea-9092-887c194579c5","Type":"ContainerStarted","Data":"2cce6db7bc791738d3d7e4c6d0c0126a3e8375e6c40e1939ada02abec7d975b6"} Jan 27 12:26:47 crc kubenswrapper[4745]: I0127 12:26:47.652307 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lvn25" podStartSLOduration=3.224812969 podStartE2EDuration="43.65228879s" podCreationTimestamp="2026-01-27 12:26:04 +0000 UTC" firstStartedPulling="2026-01-27 12:26:06.469497293 +0000 UTC m=+859.274407981" lastFinishedPulling="2026-01-27 12:26:46.896973114 +0000 UTC m=+899.701883802" observedRunningTime="2026-01-27 12:26:47.651320752 +0000 UTC m=+900.456231440" watchObservedRunningTime="2026-01-27 12:26:47.65228879 +0000 UTC m=+900.457199478" Jan 27 12:27:06 crc kubenswrapper[4745]: I0127 12:27:06.169618 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl"] Jan 27 12:27:06 crc kubenswrapper[4745]: E0127 12:27:06.170309 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f573ce6f-7377-4608-8cee-357a1e7b066a" containerName="extract-content" Jan 27 12:27:06 crc kubenswrapper[4745]: I0127 12:27:06.170323 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f573ce6f-7377-4608-8cee-357a1e7b066a" containerName="extract-content" Jan 27 12:27:06 crc kubenswrapper[4745]: E0127 12:27:06.170334 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f573ce6f-7377-4608-8cee-357a1e7b066a" containerName="extract-utilities" Jan 27 12:27:06 crc kubenswrapper[4745]: I0127 12:27:06.170341 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f573ce6f-7377-4608-8cee-357a1e7b066a" containerName="extract-utilities" Jan 27 12:27:06 crc kubenswrapper[4745]: E0127 12:27:06.170354 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f573ce6f-7377-4608-8cee-357a1e7b066a" containerName="registry-server" Jan 27 12:27:06 crc kubenswrapper[4745]: I0127 12:27:06.170360 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f573ce6f-7377-4608-8cee-357a1e7b066a" containerName="registry-server" Jan 27 12:27:06 crc kubenswrapper[4745]: I0127 12:27:06.170455 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="f573ce6f-7377-4608-8cee-357a1e7b066a" containerName="registry-server" Jan 27 12:27:06 crc kubenswrapper[4745]: I0127 12:27:06.171207 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl" Jan 27 12:27:06 crc kubenswrapper[4745]: I0127 12:27:06.173458 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 12:27:06 crc kubenswrapper[4745]: I0127 12:27:06.184174 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl"] Jan 27 12:27:06 crc kubenswrapper[4745]: I0127 12:27:06.189345 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8a79568-f1f6-4fda-9a87-6c232bb3b9a6-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl\" (UID: \"a8a79568-f1f6-4fda-9a87-6c232bb3b9a6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl" Jan 27 12:27:06 crc kubenswrapper[4745]: I0127 12:27:06.189426 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9w8k\" (UniqueName: \"kubernetes.io/projected/a8a79568-f1f6-4fda-9a87-6c232bb3b9a6-kube-api-access-x9w8k\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl\" (UID: \"a8a79568-f1f6-4fda-9a87-6c232bb3b9a6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl" Jan 27 12:27:06 crc kubenswrapper[4745]: I0127 12:27:06.189449 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8a79568-f1f6-4fda-9a87-6c232bb3b9a6-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl\" (UID: \"a8a79568-f1f6-4fda-9a87-6c232bb3b9a6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl" Jan 27 12:27:06 crc kubenswrapper[4745]: I0127 12:27:06.290074 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8a79568-f1f6-4fda-9a87-6c232bb3b9a6-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl\" (UID: \"a8a79568-f1f6-4fda-9a87-6c232bb3b9a6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl" Jan 27 12:27:06 crc kubenswrapper[4745]: I0127 12:27:06.290167 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9w8k\" (UniqueName: \"kubernetes.io/projected/a8a79568-f1f6-4fda-9a87-6c232bb3b9a6-kube-api-access-x9w8k\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl\" (UID: \"a8a79568-f1f6-4fda-9a87-6c232bb3b9a6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl" Jan 27 12:27:06 crc kubenswrapper[4745]: I0127 12:27:06.290190 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8a79568-f1f6-4fda-9a87-6c232bb3b9a6-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl\" (UID: \"a8a79568-f1f6-4fda-9a87-6c232bb3b9a6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl" Jan 27 12:27:06 crc kubenswrapper[4745]: I0127 12:27:06.290677 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a8a79568-f1f6-4fda-9a87-6c232bb3b9a6-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl\" (UID: \"a8a79568-f1f6-4fda-9a87-6c232bb3b9a6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl" Jan 27 12:27:06 crc kubenswrapper[4745]: I0127 12:27:06.290933 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8a79568-f1f6-4fda-9a87-6c232bb3b9a6-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl\" (UID: \"a8a79568-f1f6-4fda-9a87-6c232bb3b9a6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl" Jan 27 12:27:06 crc kubenswrapper[4745]: I0127 12:27:06.309073 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9w8k\" (UniqueName: \"kubernetes.io/projected/a8a79568-f1f6-4fda-9a87-6c232bb3b9a6-kube-api-access-x9w8k\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl\" (UID: \"a8a79568-f1f6-4fda-9a87-6c232bb3b9a6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl" Jan 27 12:27:06 crc kubenswrapper[4745]: I0127 12:27:06.491203 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl" Jan 27 12:27:06 crc kubenswrapper[4745]: I0127 12:27:06.944147 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl"] Jan 27 12:27:07 crc kubenswrapper[4745]: I0127 12:27:07.780503 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8a79568-f1f6-4fda-9a87-6c232bb3b9a6" containerID="75109297004cc739d7adb5b75f8c6668308ca7479681accd61e0e7228879e22a" exitCode=0 Jan 27 12:27:07 crc kubenswrapper[4745]: I0127 12:27:07.780552 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl" event={"ID":"a8a79568-f1f6-4fda-9a87-6c232bb3b9a6","Type":"ContainerDied","Data":"75109297004cc739d7adb5b75f8c6668308ca7479681accd61e0e7228879e22a"} Jan 27 12:27:07 crc kubenswrapper[4745]: I0127 12:27:07.780583 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl" event={"ID":"a8a79568-f1f6-4fda-9a87-6c232bb3b9a6","Type":"ContainerStarted","Data":"7564912c2a92154710e3f1b6ae9f18162d6f5e3b98f47d40612e1f0e675d2bfe"} Jan 27 12:27:10 crc kubenswrapper[4745]: I0127 12:27:10.798315 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8a79568-f1f6-4fda-9a87-6c232bb3b9a6" containerID="777b74efe12846b6572b94cfd4a98ee1304b34886693d026df2a354d89280389" exitCode=0 Jan 27 12:27:10 crc kubenswrapper[4745]: I0127 12:27:10.798415 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl" event={"ID":"a8a79568-f1f6-4fda-9a87-6c232bb3b9a6","Type":"ContainerDied","Data":"777b74efe12846b6572b94cfd4a98ee1304b34886693d026df2a354d89280389"} Jan 27 12:27:11 crc kubenswrapper[4745]: I0127 12:27:11.804929 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8a79568-f1f6-4fda-9a87-6c232bb3b9a6" containerID="333bc85bd1328053ed4f128459afb4329178faaaa52eb210efdc64d70a32f0b1" exitCode=0 Jan 27 12:27:11 crc kubenswrapper[4745]: I0127 
Jan 27 12:27:06 crc kubenswrapper[4745]: I0127 12:27:06.491203 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl"
Jan 27 12:27:06 crc kubenswrapper[4745]: I0127 12:27:06.944147 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl"]
Jan 27 12:27:07 crc kubenswrapper[4745]: I0127 12:27:07.780503 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8a79568-f1f6-4fda-9a87-6c232bb3b9a6" containerID="75109297004cc739d7adb5b75f8c6668308ca7479681accd61e0e7228879e22a" exitCode=0
Jan 27 12:27:07 crc kubenswrapper[4745]: I0127 12:27:07.780552 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl" event={"ID":"a8a79568-f1f6-4fda-9a87-6c232bb3b9a6","Type":"ContainerDied","Data":"75109297004cc739d7adb5b75f8c6668308ca7479681accd61e0e7228879e22a"}
Jan 27 12:27:07 crc kubenswrapper[4745]: I0127 12:27:07.780583 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl" event={"ID":"a8a79568-f1f6-4fda-9a87-6c232bb3b9a6","Type":"ContainerStarted","Data":"7564912c2a92154710e3f1b6ae9f18162d6f5e3b98f47d40612e1f0e675d2bfe"}
Jan 27 12:27:10 crc kubenswrapper[4745]: I0127 12:27:10.798315 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8a79568-f1f6-4fda-9a87-6c232bb3b9a6" containerID="777b74efe12846b6572b94cfd4a98ee1304b34886693d026df2a354d89280389" exitCode=0
Jan 27 12:27:10 crc kubenswrapper[4745]: I0127 12:27:10.798415 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl" event={"ID":"a8a79568-f1f6-4fda-9a87-6c232bb3b9a6","Type":"ContainerDied","Data":"777b74efe12846b6572b94cfd4a98ee1304b34886693d026df2a354d89280389"}
Jan 27 12:27:11 crc kubenswrapper[4745]: I0127 12:27:11.804929 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8a79568-f1f6-4fda-9a87-6c232bb3b9a6" containerID="333bc85bd1328053ed4f128459afb4329178faaaa52eb210efdc64d70a32f0b1" exitCode=0
Jan 27 12:27:11 crc kubenswrapper[4745]: I0127 12:27:11.804980 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl" event={"ID":"a8a79568-f1f6-4fda-9a87-6c232bb3b9a6","Type":"ContainerDied","Data":"333bc85bd1328053ed4f128459afb4329178faaaa52eb210efdc64d70a32f0b1"}
Jan 27 12:27:13 crc kubenswrapper[4745]: I0127 12:27:13.007850 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl"
Jan 27 12:27:13 crc kubenswrapper[4745]: I0127 12:27:13.179725 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8a79568-f1f6-4fda-9a87-6c232bb3b9a6-bundle\") pod \"a8a79568-f1f6-4fda-9a87-6c232bb3b9a6\" (UID: \"a8a79568-f1f6-4fda-9a87-6c232bb3b9a6\") "
Jan 27 12:27:13 crc kubenswrapper[4745]: I0127 12:27:13.179876 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8a79568-f1f6-4fda-9a87-6c232bb3b9a6-util\") pod \"a8a79568-f1f6-4fda-9a87-6c232bb3b9a6\" (UID: \"a8a79568-f1f6-4fda-9a87-6c232bb3b9a6\") "
Jan 27 12:27:13 crc kubenswrapper[4745]: I0127 12:27:13.179940 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9w8k\" (UniqueName: \"kubernetes.io/projected/a8a79568-f1f6-4fda-9a87-6c232bb3b9a6-kube-api-access-x9w8k\") pod \"a8a79568-f1f6-4fda-9a87-6c232bb3b9a6\" (UID: \"a8a79568-f1f6-4fda-9a87-6c232bb3b9a6\") "
Jan 27 12:27:13 crc kubenswrapper[4745]: I0127 12:27:13.641259 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8a79568-f1f6-4fda-9a87-6c232bb3b9a6-kube-api-access-x9w8k" (OuterVolumeSpecName: "kube-api-access-x9w8k") pod "a8a79568-f1f6-4fda-9a87-6c232bb3b9a6" (UID: "a8a79568-f1f6-4fda-9a87-6c232bb3b9a6"). InnerVolumeSpecName "kube-api-access-x9w8k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 12:27:13 crc kubenswrapper[4745]: I0127 12:27:13.643526 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8a79568-f1f6-4fda-9a87-6c232bb3b9a6-bundle" (OuterVolumeSpecName: "bundle") pod "a8a79568-f1f6-4fda-9a87-6c232bb3b9a6" (UID: "a8a79568-f1f6-4fda-9a87-6c232bb3b9a6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 12:27:13 crc kubenswrapper[4745]: I0127 12:27:13.667272 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8a79568-f1f6-4fda-9a87-6c232bb3b9a6-util" (OuterVolumeSpecName: "util") pod "a8a79568-f1f6-4fda-9a87-6c232bb3b9a6" (UID: "a8a79568-f1f6-4fda-9a87-6c232bb3b9a6"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 12:27:13 crc kubenswrapper[4745]: I0127 12:27:13.686499 4745 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8a79568-f1f6-4fda-9a87-6c232bb3b9a6-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 12:27:13 crc kubenswrapper[4745]: I0127 12:27:13.686565 4745 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8a79568-f1f6-4fda-9a87-6c232bb3b9a6-util\") on node \"crc\" DevicePath \"\""
Jan 27 12:27:13 crc kubenswrapper[4745]: I0127 12:27:13.686577 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9w8k\" (UniqueName: \"kubernetes.io/projected/a8a79568-f1f6-4fda-9a87-6c232bb3b9a6-kube-api-access-x9w8k\") on node \"crc\" DevicePath \"\""
Jan 27 12:27:13 crc kubenswrapper[4745]: I0127 12:27:13.817542 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl" event={"ID":"a8a79568-f1f6-4fda-9a87-6c232bb3b9a6","Type":"ContainerDied","Data":"7564912c2a92154710e3f1b6ae9f18162d6f5e3b98f47d40612e1f0e675d2bfe"}
Jan 27 12:27:13 crc kubenswrapper[4745]: I0127 12:27:13.817599 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7564912c2a92154710e3f1b6ae9f18162d6f5e3b98f47d40612e1f0e675d2bfe"
Jan 27 12:27:13 crc kubenswrapper[4745]: I0127 12:27:13.817643 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl"
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-gwj5j" Jan 27 12:27:17 crc kubenswrapper[4745]: I0127 12:27:17.719368 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 27 12:27:17 crc kubenswrapper[4745]: I0127 12:27:17.719876 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 27 12:27:17 crc kubenswrapper[4745]: I0127 12:27:17.730592 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-76jt7" Jan 27 12:27:17 crc kubenswrapper[4745]: I0127 12:27:17.741711 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-gwj5j"] Jan 27 12:27:17 crc kubenswrapper[4745]: I0127 12:27:17.839805 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thrm4\" (UniqueName: \"kubernetes.io/projected/c9455dbe-15f6-4d1b-ad15-2d5108ded02e-kube-api-access-thrm4\") pod \"nmstate-operator-646758c888-gwj5j\" (UID: \"c9455dbe-15f6-4d1b-ad15-2d5108ded02e\") " pod="openshift-nmstate/nmstate-operator-646758c888-gwj5j" Jan 27 12:27:17 crc kubenswrapper[4745]: I0127 12:27:17.941371 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thrm4\" (UniqueName: \"kubernetes.io/projected/c9455dbe-15f6-4d1b-ad15-2d5108ded02e-kube-api-access-thrm4\") pod \"nmstate-operator-646758c888-gwj5j\" (UID: \"c9455dbe-15f6-4d1b-ad15-2d5108ded02e\") " pod="openshift-nmstate/nmstate-operator-646758c888-gwj5j" Jan 27 12:27:17 crc kubenswrapper[4745]: I0127 12:27:17.967029 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thrm4\" (UniqueName: \"kubernetes.io/projected/c9455dbe-15f6-4d1b-ad15-2d5108ded02e-kube-api-access-thrm4\") pod \"nmstate-operator-646758c888-gwj5j\" (UID: \"c9455dbe-15f6-4d1b-ad15-2d5108ded02e\") " pod="openshift-nmstate/nmstate-operator-646758c888-gwj5j" Jan 27 12:27:18 crc kubenswrapper[4745]: I0127 12:27:18.037341 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-gwj5j" Jan 27 12:27:18 crc kubenswrapper[4745]: I0127 12:27:18.231182 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-gwj5j"] Jan 27 12:27:18 crc kubenswrapper[4745]: I0127 12:27:18.842577 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-gwj5j" event={"ID":"c9455dbe-15f6-4d1b-ad15-2d5108ded02e","Type":"ContainerStarted","Data":"925ee1d621087618d5e641d87b1b42af300084ca573a7bf7dc772258f87a5c08"} Jan 27 12:27:29 crc kubenswrapper[4745]: I0127 12:27:29.906988 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-gwj5j" event={"ID":"c9455dbe-15f6-4d1b-ad15-2d5108ded02e","Type":"ContainerStarted","Data":"c7ea09fd8671dbab48bda475d675c74ff4582dc0dfef9c769664bd2bf401f972"} Jan 27 12:27:29 crc kubenswrapper[4745]: I0127 12:27:29.927276 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-gwj5j" podStartSLOduration=1.9748486729999999 podStartE2EDuration="12.927254263s" podCreationTimestamp="2026-01-27 12:27:17 +0000 UTC" firstStartedPulling="2026-01-27 12:27:18.245285909 +0000 UTC m=+931.050196597" lastFinishedPulling="2026-01-27 12:27:29.197691499 +0000 UTC m=+942.002602187" observedRunningTime="2026-01-27 12:27:29.923552026 +0000 UTC m=+942.728462724" watchObservedRunningTime="2026-01-27 12:27:29.927254263 +0000 UTC m=+942.732164951" Jan 27 12:27:30 crc kubenswrapper[4745]: I0127 12:27:30.973358 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-dk5k8"] Jan 27 12:27:30 crc kubenswrapper[4745]: I0127 12:27:30.974977 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-dk5k8" Jan 27 12:27:30 crc kubenswrapper[4745]: I0127 12:27:30.977939 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-mhlqs" Jan 27 12:27:30 crc kubenswrapper[4745]: I0127 12:27:30.984563 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-5gvmk"] Jan 27 12:27:30 crc kubenswrapper[4745]: I0127 12:27:30.985422 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5gvmk" Jan 27 12:27:30 crc kubenswrapper[4745]: I0127 12:27:30.990049 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-dk5k8"] Jan 27 12:27:30 crc kubenswrapper[4745]: I0127 12:27:30.993602 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.000723 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-5gvmk"] Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.019800 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-5bm8w"] Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.020733 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-5bm8w" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.118646 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6864b9ac-a4d6-46c5-b994-9710da668093-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-5gvmk\" (UID: \"6864b9ac-a4d6-46c5-b994-9710da668093\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5gvmk" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.118701 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/43ca3915-5425-4595-84b8-dd3c7fc696f3-ovs-socket\") pod \"nmstate-handler-5bm8w\" (UID: \"43ca3915-5425-4595-84b8-dd3c7fc696f3\") " pod="openshift-nmstate/nmstate-handler-5bm8w" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.118801 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww9vm\" (UniqueName: \"kubernetes.io/projected/29a40d2d-f958-4b3a-ac04-0c817c5aa6ad-kube-api-access-ww9vm\") pod \"nmstate-metrics-54757c584b-dk5k8\" (UID: \"29a40d2d-f958-4b3a-ac04-0c817c5aa6ad\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-dk5k8" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.118831 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/43ca3915-5425-4595-84b8-dd3c7fc696f3-nmstate-lock\") pod \"nmstate-handler-5bm8w\" (UID: \"43ca3915-5425-4595-84b8-dd3c7fc696f3\") " pod="openshift-nmstate/nmstate-handler-5bm8w" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.119003 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/43ca3915-5425-4595-84b8-dd3c7fc696f3-dbus-socket\") pod \"nmstate-handler-5bm8w\" (UID: \"43ca3915-5425-4595-84b8-dd3c7fc696f3\") " pod="openshift-nmstate/nmstate-handler-5bm8w" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.119090 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fds7\" (UniqueName: \"kubernetes.io/projected/43ca3915-5425-4595-84b8-dd3c7fc696f3-kube-api-access-7fds7\") pod \"nmstate-handler-5bm8w\" (UID: \"43ca3915-5425-4595-84b8-dd3c7fc696f3\") " pod="openshift-nmstate/nmstate-handler-5bm8w" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.119232 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwzwx\" (UniqueName: \"kubernetes.io/projected/6864b9ac-a4d6-46c5-b994-9710da668093-kube-api-access-jwzwx\") pod \"nmstate-webhook-8474b5b9d8-5gvmk\" (UID: \"6864b9ac-a4d6-46c5-b994-9710da668093\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5gvmk" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.120265 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc5sv"] Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.121512 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc5sv" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.123175 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.123307 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-wm6tc" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.123581 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.146914 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc5sv"] Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.220112 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/43ca3915-5425-4595-84b8-dd3c7fc696f3-dbus-socket\") pod \"nmstate-handler-5bm8w\" (UID: \"43ca3915-5425-4595-84b8-dd3c7fc696f3\") " pod="openshift-nmstate/nmstate-handler-5bm8w" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.220427 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fds7\" (UniqueName: \"kubernetes.io/projected/43ca3915-5425-4595-84b8-dd3c7fc696f3-kube-api-access-7fds7\") pod \"nmstate-handler-5bm8w\" (UID: \"43ca3915-5425-4595-84b8-dd3c7fc696f3\") " pod="openshift-nmstate/nmstate-handler-5bm8w" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.220550 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b7bf861-ac3a-4232-9783-6b7662b6c69b-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-zc5sv\" (UID: \"7b7bf861-ac3a-4232-9783-6b7662b6c69b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc5sv" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.220759 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7b7bf861-ac3a-4232-9783-6b7662b6c69b-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-zc5sv\" (UID: \"7b7bf861-ac3a-4232-9783-6b7662b6c69b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc5sv" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.220442 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/43ca3915-5425-4595-84b8-dd3c7fc696f3-dbus-socket\") pod \"nmstate-handler-5bm8w\" (UID: \"43ca3915-5425-4595-84b8-dd3c7fc696f3\") " pod="openshift-nmstate/nmstate-handler-5bm8w" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.221932 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwzwx\" (UniqueName: \"kubernetes.io/projected/6864b9ac-a4d6-46c5-b994-9710da668093-kube-api-access-jwzwx\") pod \"nmstate-webhook-8474b5b9d8-5gvmk\" (UID: \"6864b9ac-a4d6-46c5-b994-9710da668093\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5gvmk" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.222060 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6864b9ac-a4d6-46c5-b994-9710da668093-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-5gvmk\" (UID: 
\"6864b9ac-a4d6-46c5-b994-9710da668093\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5gvmk" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.222934 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/43ca3915-5425-4595-84b8-dd3c7fc696f3-ovs-socket\") pod \"nmstate-handler-5bm8w\" (UID: \"43ca3915-5425-4595-84b8-dd3c7fc696f3\") " pod="openshift-nmstate/nmstate-handler-5bm8w" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.222993 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/43ca3915-5425-4595-84b8-dd3c7fc696f3-ovs-socket\") pod \"nmstate-handler-5bm8w\" (UID: \"43ca3915-5425-4595-84b8-dd3c7fc696f3\") " pod="openshift-nmstate/nmstate-handler-5bm8w" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.223137 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww9vm\" (UniqueName: \"kubernetes.io/projected/29a40d2d-f958-4b3a-ac04-0c817c5aa6ad-kube-api-access-ww9vm\") pod \"nmstate-metrics-54757c584b-dk5k8\" (UID: \"29a40d2d-f958-4b3a-ac04-0c817c5aa6ad\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-dk5k8" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.223330 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/43ca3915-5425-4595-84b8-dd3c7fc696f3-nmstate-lock\") pod \"nmstate-handler-5bm8w\" (UID: \"43ca3915-5425-4595-84b8-dd3c7fc696f3\") " pod="openshift-nmstate/nmstate-handler-5bm8w" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.223433 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mjgv\" (UniqueName: \"kubernetes.io/projected/7b7bf861-ac3a-4232-9783-6b7662b6c69b-kube-api-access-8mjgv\") pod \"nmstate-console-plugin-7754f76f8b-zc5sv\" (UID: \"7b7bf861-ac3a-4232-9783-6b7662b6c69b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc5sv" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.223389 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/43ca3915-5425-4595-84b8-dd3c7fc696f3-nmstate-lock\") pod \"nmstate-handler-5bm8w\" (UID: \"43ca3915-5425-4595-84b8-dd3c7fc696f3\") " pod="openshift-nmstate/nmstate-handler-5bm8w" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.237876 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6864b9ac-a4d6-46c5-b994-9710da668093-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-5gvmk\" (UID: \"6864b9ac-a4d6-46c5-b994-9710da668093\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5gvmk" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.238574 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwzwx\" (UniqueName: \"kubernetes.io/projected/6864b9ac-a4d6-46c5-b994-9710da668093-kube-api-access-jwzwx\") pod \"nmstate-webhook-8474b5b9d8-5gvmk\" (UID: \"6864b9ac-a4d6-46c5-b994-9710da668093\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5gvmk" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.255530 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fds7\" (UniqueName: \"kubernetes.io/projected/43ca3915-5425-4595-84b8-dd3c7fc696f3-kube-api-access-7fds7\") pod 
\"nmstate-handler-5bm8w\" (UID: \"43ca3915-5425-4595-84b8-dd3c7fc696f3\") " pod="openshift-nmstate/nmstate-handler-5bm8w" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.258622 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww9vm\" (UniqueName: \"kubernetes.io/projected/29a40d2d-f958-4b3a-ac04-0c817c5aa6ad-kube-api-access-ww9vm\") pod \"nmstate-metrics-54757c584b-dk5k8\" (UID: \"29a40d2d-f958-4b3a-ac04-0c817c5aa6ad\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-dk5k8" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.298527 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-dk5k8" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.323321 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5gvmk" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.346710 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b7bf861-ac3a-4232-9783-6b7662b6c69b-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-zc5sv\" (UID: \"7b7bf861-ac3a-4232-9783-6b7662b6c69b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc5sv" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.346796 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7b7bf861-ac3a-4232-9783-6b7662b6c69b-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-zc5sv\" (UID: \"7b7bf861-ac3a-4232-9783-6b7662b6c69b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc5sv" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.346996 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mjgv\" (UniqueName: \"kubernetes.io/projected/7b7bf861-ac3a-4232-9783-6b7662b6c69b-kube-api-access-8mjgv\") pod \"nmstate-console-plugin-7754f76f8b-zc5sv\" (UID: \"7b7bf861-ac3a-4232-9783-6b7662b6c69b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc5sv" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.347644 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-5bm8w" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.348525 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7b7bf861-ac3a-4232-9783-6b7662b6c69b-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-zc5sv\" (UID: \"7b7bf861-ac3a-4232-9783-6b7662b6c69b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc5sv" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.354127 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b7bf861-ac3a-4232-9783-6b7662b6c69b-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-zc5sv\" (UID: \"7b7bf861-ac3a-4232-9783-6b7662b6c69b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc5sv" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.409798 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mjgv\" (UniqueName: \"kubernetes.io/projected/7b7bf861-ac3a-4232-9783-6b7662b6c69b-kube-api-access-8mjgv\") pod \"nmstate-console-plugin-7754f76f8b-zc5sv\" (UID: \"7b7bf861-ac3a-4232-9783-6b7662b6c69b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc5sv" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.447205 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc5sv" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.508043 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7c94d9874c-gcq49"] Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.510755 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7c94d9874c-gcq49" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.543613 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7c94d9874c-gcq49"] Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.551902 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/266257a2-93c6-48f4-baab-3fdf4505b44d-console-oauth-config\") pod \"console-7c94d9874c-gcq49\" (UID: \"266257a2-93c6-48f4-baab-3fdf4505b44d\") " pod="openshift-console/console-7c94d9874c-gcq49" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.551959 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/266257a2-93c6-48f4-baab-3fdf4505b44d-trusted-ca-bundle\") pod \"console-7c94d9874c-gcq49\" (UID: \"266257a2-93c6-48f4-baab-3fdf4505b44d\") " pod="openshift-console/console-7c94d9874c-gcq49" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.552013 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/266257a2-93c6-48f4-baab-3fdf4505b44d-console-config\") pod \"console-7c94d9874c-gcq49\" (UID: \"266257a2-93c6-48f4-baab-3fdf4505b44d\") " pod="openshift-console/console-7c94d9874c-gcq49" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.552089 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/266257a2-93c6-48f4-baab-3fdf4505b44d-oauth-serving-cert\") pod \"console-7c94d9874c-gcq49\" (UID: \"266257a2-93c6-48f4-baab-3fdf4505b44d\") " pod="openshift-console/console-7c94d9874c-gcq49" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.552139 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmd26\" (UniqueName: \"kubernetes.io/projected/266257a2-93c6-48f4-baab-3fdf4505b44d-kube-api-access-pmd26\") pod \"console-7c94d9874c-gcq49\" (UID: \"266257a2-93c6-48f4-baab-3fdf4505b44d\") " pod="openshift-console/console-7c94d9874c-gcq49" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.552186 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/266257a2-93c6-48f4-baab-3fdf4505b44d-console-serving-cert\") pod \"console-7c94d9874c-gcq49\" (UID: \"266257a2-93c6-48f4-baab-3fdf4505b44d\") " pod="openshift-console/console-7c94d9874c-gcq49" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.552209 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/266257a2-93c6-48f4-baab-3fdf4505b44d-service-ca\") pod \"console-7c94d9874c-gcq49\" (UID: \"266257a2-93c6-48f4-baab-3fdf4505b44d\") " pod="openshift-console/console-7c94d9874c-gcq49" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.653757 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/266257a2-93c6-48f4-baab-3fdf4505b44d-console-oauth-config\") pod \"console-7c94d9874c-gcq49\" (UID: \"266257a2-93c6-48f4-baab-3fdf4505b44d\") " pod="openshift-console/console-7c94d9874c-gcq49" Jan 27 12:27:31 crc 
Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.653825 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/266257a2-93c6-48f4-baab-3fdf4505b44d-trusted-ca-bundle\") pod \"console-7c94d9874c-gcq49\" (UID: \"266257a2-93c6-48f4-baab-3fdf4505b44d\") " pod="openshift-console/console-7c94d9874c-gcq49"
Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.653885 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/266257a2-93c6-48f4-baab-3fdf4505b44d-console-config\") pod \"console-7c94d9874c-gcq49\" (UID: \"266257a2-93c6-48f4-baab-3fdf4505b44d\") " pod="openshift-console/console-7c94d9874c-gcq49"
Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.653930 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/266257a2-93c6-48f4-baab-3fdf4505b44d-oauth-serving-cert\") pod \"console-7c94d9874c-gcq49\" (UID: \"266257a2-93c6-48f4-baab-3fdf4505b44d\") " pod="openshift-console/console-7c94d9874c-gcq49"
Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.653957 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmd26\" (UniqueName: \"kubernetes.io/projected/266257a2-93c6-48f4-baab-3fdf4505b44d-kube-api-access-pmd26\") pod \"console-7c94d9874c-gcq49\" (UID: \"266257a2-93c6-48f4-baab-3fdf4505b44d\") " pod="openshift-console/console-7c94d9874c-gcq49"
Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.654001 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/266257a2-93c6-48f4-baab-3fdf4505b44d-console-serving-cert\") pod \"console-7c94d9874c-gcq49\" (UID: \"266257a2-93c6-48f4-baab-3fdf4505b44d\") " pod="openshift-console/console-7c94d9874c-gcq49"
Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.654021 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/266257a2-93c6-48f4-baab-3fdf4505b44d-service-ca\") pod \"console-7c94d9874c-gcq49\" (UID: \"266257a2-93c6-48f4-baab-3fdf4505b44d\") " pod="openshift-console/console-7c94d9874c-gcq49"
Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.677428 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/266257a2-93c6-48f4-baab-3fdf4505b44d-console-serving-cert\") pod \"console-7c94d9874c-gcq49\" (UID: \"266257a2-93c6-48f4-baab-3fdf4505b44d\") " pod="openshift-console/console-7c94d9874c-gcq49"
Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.677904 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/266257a2-93c6-48f4-baab-3fdf4505b44d-console-oauth-config\") pod \"console-7c94d9874c-gcq49\" (UID: \"266257a2-93c6-48f4-baab-3fdf4505b44d\") " pod="openshift-console/console-7c94d9874c-gcq49"
Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.679009 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/266257a2-93c6-48f4-baab-3fdf4505b44d-trusted-ca-bundle\") pod \"console-7c94d9874c-gcq49\" (UID: \"266257a2-93c6-48f4-baab-3fdf4505b44d\") " pod="openshift-console/console-7c94d9874c-gcq49"
Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.679752 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/266257a2-93c6-48f4-baab-3fdf4505b44d-console-config\") pod \"console-7c94d9874c-gcq49\" (UID: \"266257a2-93c6-48f4-baab-3fdf4505b44d\") " pod="openshift-console/console-7c94d9874c-gcq49"
Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.681520 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/266257a2-93c6-48f4-baab-3fdf4505b44d-service-ca\") pod \"console-7c94d9874c-gcq49\" (UID: \"266257a2-93c6-48f4-baab-3fdf4505b44d\") " pod="openshift-console/console-7c94d9874c-gcq49"
Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.686349 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/266257a2-93c6-48f4-baab-3fdf4505b44d-oauth-serving-cert\") pod \"console-7c94d9874c-gcq49\" (UID: \"266257a2-93c6-48f4-baab-3fdf4505b44d\") " pod="openshift-console/console-7c94d9874c-gcq49"
Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.698283 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmd26\" (UniqueName: \"kubernetes.io/projected/266257a2-93c6-48f4-baab-3fdf4505b44d-kube-api-access-pmd26\") pod \"console-7c94d9874c-gcq49\" (UID: \"266257a2-93c6-48f4-baab-3fdf4505b44d\") " pod="openshift-console/console-7c94d9874c-gcq49"
Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.779605 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-dk5k8"]
Jan 27 12:27:31 crc kubenswrapper[4745]: W0127 12:27:31.790995 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29a40d2d_f958_4b3a_ac04_0c817c5aa6ad.slice/crio-fae52c189fb564bab99f8244a9cdc05f31df39cd123c41a78fcef83ea8bb160a WatchSource:0}: Error finding container fae52c189fb564bab99f8244a9cdc05f31df39cd123c41a78fcef83ea8bb160a: Status 404 returned error can't find the container with id fae52c189fb564bab99f8244a9cdc05f31df39cd123c41a78fcef83ea8bb160a
Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.794094 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-5gvmk"]
Jan 27 12:27:31 crc kubenswrapper[4745]: W0127 12:27:31.802213 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6864b9ac_a4d6_46c5_b994_9710da668093.slice/crio-9121f280467355756d1084a9f350819d6cbe1637fefdcbee7cb8fd1a0c4f7d39 WatchSource:0}: Error finding container 9121f280467355756d1084a9f350819d6cbe1637fefdcbee7cb8fd1a0c4f7d39: Status 404 returned error can't find the container with id 9121f280467355756d1084a9f350819d6cbe1637fefdcbee7cb8fd1a0c4f7d39
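The W-level "Failed to process watch event ... Status 404" entries appear to be the resource-usage watcher racing pod startup: a new crio-<id> cgroup shows up, but by the time the watcher queries for the container it is not yet (or no longer) registered, so the lookup 404s. These look benign here; the PLEG ContainerStarted events for the very same container IDs arrive moments later. A generic sketch of tolerating this kind of check-then-act race, as an illustration rather than the actual watcher code:

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("status 404: container not found")

// handleWatchEvent resolves the container behind a cgroup event and treats
// "not found" as a benign race (the container appeared or vanished between
// the event and the lookup), logging a warning instead of failing the watch.
func handleWatchEvent(id string, lookup func(string) error) {
	if err := lookup(id); errors.Is(err, errNotFound) {
		fmt.Printf("W failed to process watch event for %s: %v (ignored)\n", id, err)
	} else if err != nil {
		panic(err) // real errors would be surfaced, not swallowed
	} else {
		fmt.Println("container tracked:", id)
	}
}

func main() {
	handleWatchEvent("crio-fae52c18", func(string) error { return errNotFound })
}
```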
Need to start a new one" pod="openshift-console/console-7c94d9874c-gcq49" Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.927532 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc5sv"] Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.936011 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-dk5k8" event={"ID":"29a40d2d-f958-4b3a-ac04-0c817c5aa6ad","Type":"ContainerStarted","Data":"fae52c189fb564bab99f8244a9cdc05f31df39cd123c41a78fcef83ea8bb160a"} Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.937778 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5gvmk" event={"ID":"6864b9ac-a4d6-46c5-b994-9710da668093","Type":"ContainerStarted","Data":"9121f280467355756d1084a9f350819d6cbe1637fefdcbee7cb8fd1a0c4f7d39"} Jan 27 12:27:31 crc kubenswrapper[4745]: W0127 12:27:31.938144 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b7bf861_ac3a_4232_9783_6b7662b6c69b.slice/crio-45038a46def702ac3ab92c59e8ec559fe03239139924ed502638319def8bcfec WatchSource:0}: Error finding container 45038a46def702ac3ab92c59e8ec559fe03239139924ed502638319def8bcfec: Status 404 returned error can't find the container with id 45038a46def702ac3ab92c59e8ec559fe03239139924ed502638319def8bcfec Jan 27 12:27:31 crc kubenswrapper[4745]: I0127 12:27:31.939654 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5bm8w" event={"ID":"43ca3915-5425-4595-84b8-dd3c7fc696f3","Type":"ContainerStarted","Data":"d7cc2c84e5ab03a08c91cd20b08030c597a8bbf61bab2492cf99853d8ba5f042"} Jan 27 12:27:32 crc kubenswrapper[4745]: I0127 12:27:32.103967 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7c94d9874c-gcq49"] Jan 27 12:27:32 crc kubenswrapper[4745]: W0127 12:27:32.112959 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod266257a2_93c6_48f4_baab_3fdf4505b44d.slice/crio-1943d5aa04ab729936b558b7f804e7682c53b3fe34740d68a1ab7cbeade39d1d WatchSource:0}: Error finding container 1943d5aa04ab729936b558b7f804e7682c53b3fe34740d68a1ab7cbeade39d1d: Status 404 returned error can't find the container with id 1943d5aa04ab729936b558b7f804e7682c53b3fe34740d68a1ab7cbeade39d1d Jan 27 12:27:32 crc kubenswrapper[4745]: I0127 12:27:32.951339 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c94d9874c-gcq49" event={"ID":"266257a2-93c6-48f4-baab-3fdf4505b44d","Type":"ContainerStarted","Data":"69bcd9098f814ff44df1d76151ec12953eb60daa5bc738132511b68a597e09fb"} Jan 27 12:27:32 crc kubenswrapper[4745]: I0127 12:27:32.951778 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c94d9874c-gcq49" event={"ID":"266257a2-93c6-48f4-baab-3fdf4505b44d","Type":"ContainerStarted","Data":"1943d5aa04ab729936b558b7f804e7682c53b3fe34740d68a1ab7cbeade39d1d"} Jan 27 12:27:32 crc kubenswrapper[4745]: I0127 12:27:32.955452 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc5sv" event={"ID":"7b7bf861-ac3a-4232-9783-6b7662b6c69b","Type":"ContainerStarted","Data":"45038a46def702ac3ab92c59e8ec559fe03239139924ed502638319def8bcfec"} Jan 27 12:27:32 crc kubenswrapper[4745]: I0127 12:27:32.975884 4745 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-console/console-7c94d9874c-gcq49" podStartSLOduration=1.975868725 podStartE2EDuration="1.975868725s" podCreationTimestamp="2026-01-27 12:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:27:32.975589036 +0000 UTC m=+945.780499724" watchObservedRunningTime="2026-01-27 12:27:32.975868725 +0000 UTC m=+945.780779413" Jan 27 12:27:41 crc kubenswrapper[4745]: I0127 12:27:41.869444 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7c94d9874c-gcq49" Jan 27 12:27:41 crc kubenswrapper[4745]: I0127 12:27:41.870093 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7c94d9874c-gcq49" Jan 27 12:27:41 crc kubenswrapper[4745]: I0127 12:27:41.875325 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7c94d9874c-gcq49" Jan 27 12:27:42 crc kubenswrapper[4745]: I0127 12:27:42.051312 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-dk5k8" event={"ID":"29a40d2d-f958-4b3a-ac04-0c817c5aa6ad","Type":"ContainerStarted","Data":"2da0b896c59b642599ff0b7530eff0f1fbdd6b7e196aa700cf08a0bca2aca83e"} Jan 27 12:27:42 crc kubenswrapper[4745]: I0127 12:27:42.053974 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5gvmk" event={"ID":"6864b9ac-a4d6-46c5-b994-9710da668093","Type":"ContainerStarted","Data":"d208d5025490576d22b7e4f7d52dcb3c7ba64d8f9e758ce27248abbf75446375"} Jan 27 12:27:42 crc kubenswrapper[4745]: I0127 12:27:42.054400 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5gvmk" Jan 27 12:27:42 crc kubenswrapper[4745]: I0127 12:27:42.055714 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5bm8w" event={"ID":"43ca3915-5425-4595-84b8-dd3c7fc696f3","Type":"ContainerStarted","Data":"2c1d16039b3b61fb9b0833d88d6b2b0af5316cdd1ec3302380d8eba6c567c42f"} Jan 27 12:27:42 crc kubenswrapper[4745]: I0127 12:27:42.055779 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-5bm8w" Jan 27 12:27:42 crc kubenswrapper[4745]: I0127 12:27:42.057662 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc5sv" event={"ID":"7b7bf861-ac3a-4232-9783-6b7662b6c69b","Type":"ContainerStarted","Data":"27afaede635a88d4b2425ccf81dff45b0ccee6c612059b23d9733fcf4fa114a8"} Jan 27 12:27:42 crc kubenswrapper[4745]: I0127 12:27:42.061249 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7c94d9874c-gcq49" Jan 27 12:27:42 crc kubenswrapper[4745]: I0127 12:27:42.072912 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5gvmk" podStartSLOduration=2.962767916 podStartE2EDuration="12.072773309s" podCreationTimestamp="2026-01-27 12:27:30 +0000 UTC" firstStartedPulling="2026-01-27 12:27:31.806236554 +0000 UTC m=+944.611147242" lastFinishedPulling="2026-01-27 12:27:40.916241947 +0000 UTC m=+953.721152635" observedRunningTime="2026-01-27 12:27:42.070182314 +0000 UTC m=+954.875093002" watchObservedRunningTime="2026-01-27 12:27:42.072773309 +0000 UTC m=+954.877683997" Jan 27 12:27:42 crc 
kubenswrapper[4745]: I0127 12:27:42.089933 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-5bm8w" podStartSLOduration=2.655887258 podStartE2EDuration="12.089916143s" podCreationTimestamp="2026-01-27 12:27:30 +0000 UTC" firstStartedPulling="2026-01-27 12:27:31.481066379 +0000 UTC m=+944.285977067" lastFinishedPulling="2026-01-27 12:27:40.915095264 +0000 UTC m=+953.720005952" observedRunningTime="2026-01-27 12:27:42.087903695 +0000 UTC m=+954.892814383" watchObservedRunningTime="2026-01-27 12:27:42.089916143 +0000 UTC m=+954.894826831" Jan 27 12:27:42 crc kubenswrapper[4745]: I0127 12:27:42.127239 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zc5sv" podStartSLOduration=2.15333059 podStartE2EDuration="11.127215639s" podCreationTimestamp="2026-01-27 12:27:31 +0000 UTC" firstStartedPulling="2026-01-27 12:27:31.943021187 +0000 UTC m=+944.747931875" lastFinishedPulling="2026-01-27 12:27:40.916906236 +0000 UTC m=+953.721816924" observedRunningTime="2026-01-27 12:27:42.121973558 +0000 UTC m=+954.926884246" watchObservedRunningTime="2026-01-27 12:27:42.127215639 +0000 UTC m=+954.932126327" Jan 27 12:27:42 crc kubenswrapper[4745]: I0127 12:27:42.138972 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-zqrwf"] Jan 27 12:27:45 crc kubenswrapper[4745]: I0127 12:27:45.080603 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-dk5k8" event={"ID":"29a40d2d-f958-4b3a-ac04-0c817c5aa6ad","Type":"ContainerStarted","Data":"ba8fc738ec7e680cac19d20a66bde92eb759e4a57ea187b08d2df1487131ab7c"} Jan 27 12:27:45 crc kubenswrapper[4745]: I0127 12:27:45.094731 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-dk5k8" podStartSLOduration=2.239931677 podStartE2EDuration="15.094712022s" podCreationTimestamp="2026-01-27 12:27:30 +0000 UTC" firstStartedPulling="2026-01-27 12:27:31.793734803 +0000 UTC m=+944.598645491" lastFinishedPulling="2026-01-27 12:27:44.648515148 +0000 UTC m=+957.453425836" observedRunningTime="2026-01-27 12:27:45.093522448 +0000 UTC m=+957.898433146" watchObservedRunningTime="2026-01-27 12:27:45.094712022 +0000 UTC m=+957.899622720" Jan 27 12:27:46 crc kubenswrapper[4745]: I0127 12:27:46.376455 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-5bm8w" Jan 27 12:27:51 crc kubenswrapper[4745]: I0127 12:27:51.331015 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5gvmk" Jan 27 12:28:07 crc kubenswrapper[4745]: I0127 12:28:07.181982 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-zqrwf" podUID="94eb6425-bdf2-43d1-926e-c94700a985be" containerName="console" containerID="cri-o://056962067fbe8f5fd14aa3dc9656a74ceff87c88e96809c3b997c987eea3fe33" gracePeriod=15 Jan 27 12:28:07 crc kubenswrapper[4745]: I0127 12:28:07.316981 4745 patch_prober.go:28] interesting pod/console-f9d7485db-zqrwf container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 27 12:28:07 crc kubenswrapper[4745]: I0127 12:28:07.317320 4745 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/console-f9d7485db-zqrwf" podUID="94eb6425-bdf2-43d1-926e-c94700a985be" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 27 12:28:07 crc kubenswrapper[4745]: I0127 12:28:07.761317 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zqrwf_94eb6425-bdf2-43d1-926e-c94700a985be/console/0.log" Jan 27 12:28:07 crc kubenswrapper[4745]: I0127 12:28:07.761378 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zqrwf" Jan 27 12:28:07 crc kubenswrapper[4745]: I0127 12:28:07.909657 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/94eb6425-bdf2-43d1-926e-c94700a985be-service-ca\") pod \"94eb6425-bdf2-43d1-926e-c94700a985be\" (UID: \"94eb6425-bdf2-43d1-926e-c94700a985be\") " Jan 27 12:28:07 crc kubenswrapper[4745]: I0127 12:28:07.909784 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/94eb6425-bdf2-43d1-926e-c94700a985be-console-serving-cert\") pod \"94eb6425-bdf2-43d1-926e-c94700a985be\" (UID: \"94eb6425-bdf2-43d1-926e-c94700a985be\") " Jan 27 12:28:07 crc kubenswrapper[4745]: I0127 12:28:07.909844 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/94eb6425-bdf2-43d1-926e-c94700a985be-console-oauth-config\") pod \"94eb6425-bdf2-43d1-926e-c94700a985be\" (UID: \"94eb6425-bdf2-43d1-926e-c94700a985be\") " Jan 27 12:28:07 crc kubenswrapper[4745]: I0127 12:28:07.909871 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94eb6425-bdf2-43d1-926e-c94700a985be-trusted-ca-bundle\") pod \"94eb6425-bdf2-43d1-926e-c94700a985be\" (UID: \"94eb6425-bdf2-43d1-926e-c94700a985be\") " Jan 27 12:28:07 crc kubenswrapper[4745]: I0127 12:28:07.909899 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4hmn\" (UniqueName: \"kubernetes.io/projected/94eb6425-bdf2-43d1-926e-c94700a985be-kube-api-access-v4hmn\") pod \"94eb6425-bdf2-43d1-926e-c94700a985be\" (UID: \"94eb6425-bdf2-43d1-926e-c94700a985be\") " Jan 27 12:28:07 crc kubenswrapper[4745]: I0127 12:28:07.909950 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/94eb6425-bdf2-43d1-926e-c94700a985be-oauth-serving-cert\") pod \"94eb6425-bdf2-43d1-926e-c94700a985be\" (UID: \"94eb6425-bdf2-43d1-926e-c94700a985be\") " Jan 27 12:28:07 crc kubenswrapper[4745]: I0127 12:28:07.909972 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/94eb6425-bdf2-43d1-926e-c94700a985be-console-config\") pod \"94eb6425-bdf2-43d1-926e-c94700a985be\" (UID: \"94eb6425-bdf2-43d1-926e-c94700a985be\") " Jan 27 12:28:07 crc kubenswrapper[4745]: I0127 12:28:07.910404 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94eb6425-bdf2-43d1-926e-c94700a985be-service-ca" (OuterVolumeSpecName: "service-ca") pod "94eb6425-bdf2-43d1-926e-c94700a985be" (UID: "94eb6425-bdf2-43d1-926e-c94700a985be"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:28:07 crc kubenswrapper[4745]: I0127 12:28:07.910599 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94eb6425-bdf2-43d1-926e-c94700a985be-console-config" (OuterVolumeSpecName: "console-config") pod "94eb6425-bdf2-43d1-926e-c94700a985be" (UID: "94eb6425-bdf2-43d1-926e-c94700a985be"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:28:07 crc kubenswrapper[4745]: I0127 12:28:07.910886 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94eb6425-bdf2-43d1-926e-c94700a985be-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "94eb6425-bdf2-43d1-926e-c94700a985be" (UID: "94eb6425-bdf2-43d1-926e-c94700a985be"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:28:07 crc kubenswrapper[4745]: I0127 12:28:07.910979 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94eb6425-bdf2-43d1-926e-c94700a985be-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "94eb6425-bdf2-43d1-926e-c94700a985be" (UID: "94eb6425-bdf2-43d1-926e-c94700a985be"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:28:07 crc kubenswrapper[4745]: I0127 12:28:07.915432 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94eb6425-bdf2-43d1-926e-c94700a985be-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "94eb6425-bdf2-43d1-926e-c94700a985be" (UID: "94eb6425-bdf2-43d1-926e-c94700a985be"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:28:07 crc kubenswrapper[4745]: I0127 12:28:07.915912 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94eb6425-bdf2-43d1-926e-c94700a985be-kube-api-access-v4hmn" (OuterVolumeSpecName: "kube-api-access-v4hmn") pod "94eb6425-bdf2-43d1-926e-c94700a985be" (UID: "94eb6425-bdf2-43d1-926e-c94700a985be"). InnerVolumeSpecName "kube-api-access-v4hmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:28:07 crc kubenswrapper[4745]: I0127 12:28:07.919022 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94eb6425-bdf2-43d1-926e-c94700a985be-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "94eb6425-bdf2-43d1-926e-c94700a985be" (UID: "94eb6425-bdf2-43d1-926e-c94700a985be"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:28:08 crc kubenswrapper[4745]: I0127 12:28:08.011151 4745 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/94eb6425-bdf2-43d1-926e-c94700a985be-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:28:08 crc kubenswrapper[4745]: I0127 12:28:08.011198 4745 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/94eb6425-bdf2-43d1-926e-c94700a985be-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:28:08 crc kubenswrapper[4745]: I0127 12:28:08.011207 4745 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94eb6425-bdf2-43d1-926e-c94700a985be-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 12:28:08 crc kubenswrapper[4745]: I0127 12:28:08.011217 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4hmn\" (UniqueName: \"kubernetes.io/projected/94eb6425-bdf2-43d1-926e-c94700a985be-kube-api-access-v4hmn\") on node \"crc\" DevicePath \"\"" Jan 27 12:28:08 crc kubenswrapper[4745]: I0127 12:28:08.011228 4745 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/94eb6425-bdf2-43d1-926e-c94700a985be-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 12:28:08 crc kubenswrapper[4745]: I0127 12:28:08.011235 4745 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/94eb6425-bdf2-43d1-926e-c94700a985be-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 12:28:08 crc kubenswrapper[4745]: I0127 12:28:08.011244 4745 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/94eb6425-bdf2-43d1-926e-c94700a985be-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 12:28:08 crc kubenswrapper[4745]: I0127 12:28:08.217102 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zqrwf_94eb6425-bdf2-43d1-926e-c94700a985be/console/0.log" Jan 27 12:28:08 crc kubenswrapper[4745]: I0127 12:28:08.217417 4745 generic.go:334] "Generic (PLEG): container finished" podID="94eb6425-bdf2-43d1-926e-c94700a985be" containerID="056962067fbe8f5fd14aa3dc9656a74ceff87c88e96809c3b997c987eea3fe33" exitCode=2 Jan 27 12:28:08 crc kubenswrapper[4745]: I0127 12:28:08.217453 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zqrwf" event={"ID":"94eb6425-bdf2-43d1-926e-c94700a985be","Type":"ContainerDied","Data":"056962067fbe8f5fd14aa3dc9656a74ceff87c88e96809c3b997c987eea3fe33"} Jan 27 12:28:08 crc kubenswrapper[4745]: I0127 12:28:08.217468 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-zqrwf" Jan 27 12:28:08 crc kubenswrapper[4745]: I0127 12:28:08.217478 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zqrwf" event={"ID":"94eb6425-bdf2-43d1-926e-c94700a985be","Type":"ContainerDied","Data":"db5bcc70db46110ef9414198f8ac0e6c653383151c993abb3b6e5eeec72cd67c"} Jan 27 12:28:08 crc kubenswrapper[4745]: I0127 12:28:08.217494 4745 scope.go:117] "RemoveContainer" containerID="056962067fbe8f5fd14aa3dc9656a74ceff87c88e96809c3b997c987eea3fe33" Jan 27 12:28:08 crc kubenswrapper[4745]: I0127 12:28:08.232242 4745 scope.go:117] "RemoveContainer" containerID="056962067fbe8f5fd14aa3dc9656a74ceff87c88e96809c3b997c987eea3fe33" Jan 27 12:28:08 crc kubenswrapper[4745]: E0127 12:28:08.232715 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"056962067fbe8f5fd14aa3dc9656a74ceff87c88e96809c3b997c987eea3fe33\": container with ID starting with 056962067fbe8f5fd14aa3dc9656a74ceff87c88e96809c3b997c987eea3fe33 not found: ID does not exist" containerID="056962067fbe8f5fd14aa3dc9656a74ceff87c88e96809c3b997c987eea3fe33" Jan 27 12:28:08 crc kubenswrapper[4745]: I0127 12:28:08.232749 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"056962067fbe8f5fd14aa3dc9656a74ceff87c88e96809c3b997c987eea3fe33"} err="failed to get container status \"056962067fbe8f5fd14aa3dc9656a74ceff87c88e96809c3b997c987eea3fe33\": rpc error: code = NotFound desc = could not find container \"056962067fbe8f5fd14aa3dc9656a74ceff87c88e96809c3b997c987eea3fe33\": container with ID starting with 056962067fbe8f5fd14aa3dc9656a74ceff87c88e96809c3b997c987eea3fe33 not found: ID does not exist" Jan 27 12:28:08 crc kubenswrapper[4745]: I0127 12:28:08.236568 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-zqrwf"] Jan 27 12:28:08 crc kubenswrapper[4745]: I0127 12:28:08.240797 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-zqrwf"] Jan 27 12:28:09 crc kubenswrapper[4745]: I0127 12:28:09.121864 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb"] Jan 27 12:28:09 crc kubenswrapper[4745]: E0127 12:28:09.122780 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94eb6425-bdf2-43d1-926e-c94700a985be" containerName="console" Jan 27 12:28:09 crc kubenswrapper[4745]: I0127 12:28:09.122925 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="94eb6425-bdf2-43d1-926e-c94700a985be" containerName="console" Jan 27 12:28:09 crc kubenswrapper[4745]: I0127 12:28:09.123978 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="94eb6425-bdf2-43d1-926e-c94700a985be" containerName="console" Jan 27 12:28:09 crc kubenswrapper[4745]: I0127 12:28:09.139289 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb" Jan 27 12:28:09 crc kubenswrapper[4745]: I0127 12:28:09.143877 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 12:28:09 crc kubenswrapper[4745]: I0127 12:28:09.145348 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb"] Jan 27 12:28:09 crc kubenswrapper[4745]: I0127 12:28:09.231757 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb\" (UID: \"45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb" Jan 27 12:28:09 crc kubenswrapper[4745]: I0127 12:28:09.231851 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl2tx\" (UniqueName: \"kubernetes.io/projected/45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20-kube-api-access-kl2tx\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb\" (UID: \"45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb" Jan 27 12:28:09 crc kubenswrapper[4745]: I0127 12:28:09.231888 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb\" (UID: \"45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb" Jan 27 12:28:09 crc kubenswrapper[4745]: I0127 12:28:09.333277 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl2tx\" (UniqueName: \"kubernetes.io/projected/45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20-kube-api-access-kl2tx\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb\" (UID: \"45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb" Jan 27 12:28:09 crc kubenswrapper[4745]: I0127 12:28:09.333326 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb\" (UID: \"45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb" Jan 27 12:28:09 crc kubenswrapper[4745]: I0127 12:28:09.333428 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb\" (UID: \"45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb" Jan 27 12:28:09 crc kubenswrapper[4745]: I0127 12:28:09.334055 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb\" (UID: \"45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb" Jan 27 12:28:09 crc kubenswrapper[4745]: I0127 12:28:09.334340 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb\" (UID: \"45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb" Jan 27 12:28:09 crc kubenswrapper[4745]: I0127 12:28:09.353780 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl2tx\" (UniqueName: \"kubernetes.io/projected/45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20-kube-api-access-kl2tx\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb\" (UID: \"45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb" Jan 27 12:28:09 crc kubenswrapper[4745]: I0127 12:28:09.459094 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb" Jan 27 12:28:09 crc kubenswrapper[4745]: I0127 12:28:09.995538 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb"] Jan 27 12:28:10 crc kubenswrapper[4745]: I0127 12:28:10.081043 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94eb6425-bdf2-43d1-926e-c94700a985be" path="/var/lib/kubelet/pods/94eb6425-bdf2-43d1-926e-c94700a985be/volumes" Jan 27 12:28:10 crc kubenswrapper[4745]: I0127 12:28:10.233312 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb" event={"ID":"45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20","Type":"ContainerStarted","Data":"a471e38352458cd582aca43b8af4426af6e6c78f7eb462583afe39d8c9df93ca"} Jan 27 12:28:11 crc kubenswrapper[4745]: I0127 12:28:11.241312 4745 generic.go:334] "Generic (PLEG): container finished" podID="45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20" containerID="da6ef175edfa128271033eebde1cd3e9f403f63a1027981f5b3d1e8296743458" exitCode=0 Jan 27 12:28:11 crc kubenswrapper[4745]: I0127 12:28:11.241356 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb" event={"ID":"45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20","Type":"ContainerDied","Data":"da6ef175edfa128271033eebde1cd3e9f403f63a1027981f5b3d1e8296743458"} Jan 27 12:28:15 crc kubenswrapper[4745]: I0127 12:28:15.274087 4745 generic.go:334] "Generic (PLEG): container finished" podID="45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20" containerID="d0dcd517220b3d01a1a354d41d9cfc7ee3339f1d1da807c42fe80e93b3b43e6d" exitCode=0 Jan 27 12:28:15 crc kubenswrapper[4745]: I0127 12:28:15.274196 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb" event={"ID":"45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20","Type":"ContainerDied","Data":"d0dcd517220b3d01a1a354d41d9cfc7ee3339f1d1da807c42fe80e93b3b43e6d"} Jan 27 12:28:16 crc kubenswrapper[4745]: I0127 12:28:16.281841 
4745 generic.go:334] "Generic (PLEG): container finished" podID="45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20" containerID="8dd2fe235c727c4af704d027875e1b4e2e5e5c6bb4521ec37540dbea038fe6b1" exitCode=0 Jan 27 12:28:16 crc kubenswrapper[4745]: I0127 12:28:16.282016 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb" event={"ID":"45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20","Type":"ContainerDied","Data":"8dd2fe235c727c4af704d027875e1b4e2e5e5c6bb4521ec37540dbea038fe6b1"} Jan 27 12:28:17 crc kubenswrapper[4745]: I0127 12:28:17.545504 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb" Jan 27 12:28:17 crc kubenswrapper[4745]: I0127 12:28:17.650632 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20-bundle\") pod \"45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20\" (UID: \"45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20\") " Jan 27 12:28:17 crc kubenswrapper[4745]: I0127 12:28:17.650781 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl2tx\" (UniqueName: \"kubernetes.io/projected/45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20-kube-api-access-kl2tx\") pod \"45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20\" (UID: \"45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20\") " Jan 27 12:28:17 crc kubenswrapper[4745]: I0127 12:28:17.650880 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20-util\") pod \"45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20\" (UID: \"45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20\") " Jan 27 12:28:17 crc kubenswrapper[4745]: I0127 12:28:17.651879 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20-bundle" (OuterVolumeSpecName: "bundle") pod "45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20" (UID: "45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:28:17 crc kubenswrapper[4745]: I0127 12:28:17.653618 4745 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 12:28:17 crc kubenswrapper[4745]: I0127 12:28:17.657181 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20-kube-api-access-kl2tx" (OuterVolumeSpecName: "kube-api-access-kl2tx") pod "45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20" (UID: "45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20"). InnerVolumeSpecName "kube-api-access-kl2tx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:28:17 crc kubenswrapper[4745]: I0127 12:28:17.660765 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20-util" (OuterVolumeSpecName: "util") pod "45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20" (UID: "45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:28:17 crc kubenswrapper[4745]: I0127 12:28:17.755091 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl2tx\" (UniqueName: \"kubernetes.io/projected/45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20-kube-api-access-kl2tx\") on node \"crc\" DevicePath \"\"" Jan 27 12:28:17 crc kubenswrapper[4745]: I0127 12:28:17.755142 4745 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20-util\") on node \"crc\" DevicePath \"\"" Jan 27 12:28:18 crc kubenswrapper[4745]: I0127 12:28:18.302095 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb" event={"ID":"45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20","Type":"ContainerDied","Data":"a471e38352458cd582aca43b8af4426af6e6c78f7eb462583afe39d8c9df93ca"} Jan 27 12:28:18 crc kubenswrapper[4745]: I0127 12:28:18.302151 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a471e38352458cd582aca43b8af4426af6e6c78f7eb462583afe39d8c9df93ca" Jan 27 12:28:18 crc kubenswrapper[4745]: I0127 12:28:18.302353 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb" Jan 27 12:28:20 crc kubenswrapper[4745]: I0127 12:28:20.476943 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x7hjq"] Jan 27 12:28:20 crc kubenswrapper[4745]: E0127 12:28:20.477538 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20" containerName="extract" Jan 27 12:28:20 crc kubenswrapper[4745]: I0127 12:28:20.477553 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20" containerName="extract" Jan 27 12:28:20 crc kubenswrapper[4745]: E0127 12:28:20.477578 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20" containerName="pull" Jan 27 12:28:20 crc kubenswrapper[4745]: I0127 12:28:20.477588 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20" containerName="pull" Jan 27 12:28:20 crc kubenswrapper[4745]: E0127 12:28:20.477605 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20" containerName="util" Jan 27 12:28:20 crc kubenswrapper[4745]: I0127 12:28:20.477613 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20" containerName="util" Jan 27 12:28:20 crc kubenswrapper[4745]: I0127 12:28:20.477745 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20" containerName="extract" Jan 27 12:28:20 crc kubenswrapper[4745]: I0127 12:28:20.478729 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x7hjq" Jan 27 12:28:20 crc kubenswrapper[4745]: I0127 12:28:20.502357 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x7hjq"] Jan 27 12:28:20 crc kubenswrapper[4745]: I0127 12:28:20.595647 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05c02727-3b79-4744-85e8-492aaa86ec26-utilities\") pod \"community-operators-x7hjq\" (UID: \"05c02727-3b79-4744-85e8-492aaa86ec26\") " pod="openshift-marketplace/community-operators-x7hjq" Jan 27 12:28:20 crc kubenswrapper[4745]: I0127 12:28:20.595685 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05c02727-3b79-4744-85e8-492aaa86ec26-catalog-content\") pod \"community-operators-x7hjq\" (UID: \"05c02727-3b79-4744-85e8-492aaa86ec26\") " pod="openshift-marketplace/community-operators-x7hjq" Jan 27 12:28:20 crc kubenswrapper[4745]: I0127 12:28:20.595709 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brnzs\" (UniqueName: \"kubernetes.io/projected/05c02727-3b79-4744-85e8-492aaa86ec26-kube-api-access-brnzs\") pod \"community-operators-x7hjq\" (UID: \"05c02727-3b79-4744-85e8-492aaa86ec26\") " pod="openshift-marketplace/community-operators-x7hjq" Jan 27 12:28:20 crc kubenswrapper[4745]: I0127 12:28:20.696765 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05c02727-3b79-4744-85e8-492aaa86ec26-utilities\") pod \"community-operators-x7hjq\" (UID: \"05c02727-3b79-4744-85e8-492aaa86ec26\") " pod="openshift-marketplace/community-operators-x7hjq" Jan 27 12:28:20 crc kubenswrapper[4745]: I0127 12:28:20.696845 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05c02727-3b79-4744-85e8-492aaa86ec26-catalog-content\") pod \"community-operators-x7hjq\" (UID: \"05c02727-3b79-4744-85e8-492aaa86ec26\") " pod="openshift-marketplace/community-operators-x7hjq" Jan 27 12:28:20 crc kubenswrapper[4745]: I0127 12:28:20.696881 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brnzs\" (UniqueName: \"kubernetes.io/projected/05c02727-3b79-4744-85e8-492aaa86ec26-kube-api-access-brnzs\") pod \"community-operators-x7hjq\" (UID: \"05c02727-3b79-4744-85e8-492aaa86ec26\") " pod="openshift-marketplace/community-operators-x7hjq" Jan 27 12:28:20 crc kubenswrapper[4745]: I0127 12:28:20.697605 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05c02727-3b79-4744-85e8-492aaa86ec26-utilities\") pod \"community-operators-x7hjq\" (UID: \"05c02727-3b79-4744-85e8-492aaa86ec26\") " pod="openshift-marketplace/community-operators-x7hjq" Jan 27 12:28:20 crc kubenswrapper[4745]: I0127 12:28:20.697664 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05c02727-3b79-4744-85e8-492aaa86ec26-catalog-content\") pod \"community-operators-x7hjq\" (UID: \"05c02727-3b79-4744-85e8-492aaa86ec26\") " pod="openshift-marketplace/community-operators-x7hjq" Jan 27 12:28:20 crc kubenswrapper[4745]: I0127 12:28:20.714878 4745 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-brnzs\" (UniqueName: \"kubernetes.io/projected/05c02727-3b79-4744-85e8-492aaa86ec26-kube-api-access-brnzs\") pod \"community-operators-x7hjq\" (UID: \"05c02727-3b79-4744-85e8-492aaa86ec26\") " pod="openshift-marketplace/community-operators-x7hjq" Jan 27 12:28:20 crc kubenswrapper[4745]: I0127 12:28:20.807511 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x7hjq" Jan 27 12:28:21 crc kubenswrapper[4745]: I0127 12:28:21.034730 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x7hjq"] Jan 27 12:28:21 crc kubenswrapper[4745]: I0127 12:28:21.321761 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7hjq" event={"ID":"05c02727-3b79-4744-85e8-492aaa86ec26","Type":"ContainerStarted","Data":"07de6e2ea781093c47055e2183743b31e69144e9f7a87d7953cc99520ba2ac4d"} Jan 27 12:28:22 crc kubenswrapper[4745]: I0127 12:28:22.330541 4745 generic.go:334] "Generic (PLEG): container finished" podID="05c02727-3b79-4744-85e8-492aaa86ec26" containerID="19ab932cbbc93f727712c7fae88cd4a86a601be5ae3346d862f20d18e71c532e" exitCode=0 Jan 27 12:28:22 crc kubenswrapper[4745]: I0127 12:28:22.330718 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7hjq" event={"ID":"05c02727-3b79-4744-85e8-492aaa86ec26","Type":"ContainerDied","Data":"19ab932cbbc93f727712c7fae88cd4a86a601be5ae3346d862f20d18e71c532e"} Jan 27 12:28:26 crc kubenswrapper[4745]: I0127 12:28:26.385840 4745 generic.go:334] "Generic (PLEG): container finished" podID="05c02727-3b79-4744-85e8-492aaa86ec26" containerID="cdf44af7c863f2e18b9eec1e5578382bd32c7218c030f233e87fd3a292e3cc66" exitCode=0 Jan 27 12:28:26 crc kubenswrapper[4745]: I0127 12:28:26.386171 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7hjq" event={"ID":"05c02727-3b79-4744-85e8-492aaa86ec26","Type":"ContainerDied","Data":"cdf44af7c863f2e18b9eec1e5578382bd32c7218c030f233e87fd3a292e3cc66"} Jan 27 12:28:28 crc kubenswrapper[4745]: I0127 12:28:28.399171 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7hjq" event={"ID":"05c02727-3b79-4744-85e8-492aaa86ec26","Type":"ContainerStarted","Data":"ff592c6cfb9b5c05e00cd9e09644b08b2e9d65d88e23b7bf602695ab2a88b18b"} Jan 27 12:28:28 crc kubenswrapper[4745]: I0127 12:28:28.415647 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x7hjq" podStartSLOduration=3.211236018 podStartE2EDuration="8.415625194s" podCreationTimestamp="2026-01-27 12:28:20 +0000 UTC" firstStartedPulling="2026-01-27 12:28:22.332686628 +0000 UTC m=+995.137597326" lastFinishedPulling="2026-01-27 12:28:27.537075804 +0000 UTC m=+1000.341986502" observedRunningTime="2026-01-27 12:28:28.413692718 +0000 UTC m=+1001.218603416" watchObservedRunningTime="2026-01-27 12:28:28.415625194 +0000 UTC m=+1001.220535872" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.161559 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-69c98499d8-74brb"] Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.162257 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-69c98499d8-74brb" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.178209 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.178946 4745 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.179111 4745 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.179253 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.180018 4745 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-gtg72" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.190713 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-69c98499d8-74brb"] Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.259787 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f1273171-d32f-4231-85d4-9c949800ca10-webhook-cert\") pod \"metallb-operator-controller-manager-69c98499d8-74brb\" (UID: \"f1273171-d32f-4231-85d4-9c949800ca10\") " pod="metallb-system/metallb-operator-controller-manager-69c98499d8-74brb" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.259931 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f1273171-d32f-4231-85d4-9c949800ca10-apiservice-cert\") pod \"metallb-operator-controller-manager-69c98499d8-74brb\" (UID: \"f1273171-d32f-4231-85d4-9c949800ca10\") " pod="metallb-system/metallb-operator-controller-manager-69c98499d8-74brb" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.259973 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl9md\" (UniqueName: \"kubernetes.io/projected/f1273171-d32f-4231-85d4-9c949800ca10-kube-api-access-bl9md\") pod \"metallb-operator-controller-manager-69c98499d8-74brb\" (UID: \"f1273171-d32f-4231-85d4-9c949800ca10\") " pod="metallb-system/metallb-operator-controller-manager-69c98499d8-74brb" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.361344 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl9md\" (UniqueName: \"kubernetes.io/projected/f1273171-d32f-4231-85d4-9c949800ca10-kube-api-access-bl9md\") pod \"metallb-operator-controller-manager-69c98499d8-74brb\" (UID: \"f1273171-d32f-4231-85d4-9c949800ca10\") " pod="metallb-system/metallb-operator-controller-manager-69c98499d8-74brb" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.361452 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f1273171-d32f-4231-85d4-9c949800ca10-webhook-cert\") pod \"metallb-operator-controller-manager-69c98499d8-74brb\" (UID: \"f1273171-d32f-4231-85d4-9c949800ca10\") " pod="metallb-system/metallb-operator-controller-manager-69c98499d8-74brb" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.361508 4745 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f1273171-d32f-4231-85d4-9c949800ca10-apiservice-cert\") pod \"metallb-operator-controller-manager-69c98499d8-74brb\" (UID: \"f1273171-d32f-4231-85d4-9c949800ca10\") " pod="metallb-system/metallb-operator-controller-manager-69c98499d8-74brb" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.367148 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f1273171-d32f-4231-85d4-9c949800ca10-webhook-cert\") pod \"metallb-operator-controller-manager-69c98499d8-74brb\" (UID: \"f1273171-d32f-4231-85d4-9c949800ca10\") " pod="metallb-system/metallb-operator-controller-manager-69c98499d8-74brb" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.367414 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f1273171-d32f-4231-85d4-9c949800ca10-apiservice-cert\") pod \"metallb-operator-controller-manager-69c98499d8-74brb\" (UID: \"f1273171-d32f-4231-85d4-9c949800ca10\") " pod="metallb-system/metallb-operator-controller-manager-69c98499d8-74brb" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.384661 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl9md\" (UniqueName: \"kubernetes.io/projected/f1273171-d32f-4231-85d4-9c949800ca10-kube-api-access-bl9md\") pod \"metallb-operator-controller-manager-69c98499d8-74brb\" (UID: \"f1273171-d32f-4231-85d4-9c949800ca10\") " pod="metallb-system/metallb-operator-controller-manager-69c98499d8-74brb" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.478562 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-69c98499d8-74brb" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.487670 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-66f8559b6f-b4zgv"] Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.488899 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-66f8559b6f-b4zgv" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.493047 4745 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.493439 4745 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-z7phf" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.494051 4745 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.511769 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-66f8559b6f-b4zgv"] Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.564581 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/340aa282-d1b7-4386-a768-63ee67934411-webhook-cert\") pod \"metallb-operator-webhook-server-66f8559b6f-b4zgv\" (UID: \"340aa282-d1b7-4386-a768-63ee67934411\") " pod="metallb-system/metallb-operator-webhook-server-66f8559b6f-b4zgv" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.564673 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtkbc\" (UniqueName: \"kubernetes.io/projected/340aa282-d1b7-4386-a768-63ee67934411-kube-api-access-vtkbc\") pod \"metallb-operator-webhook-server-66f8559b6f-b4zgv\" (UID: \"340aa282-d1b7-4386-a768-63ee67934411\") " pod="metallb-system/metallb-operator-webhook-server-66f8559b6f-b4zgv" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.564711 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/340aa282-d1b7-4386-a768-63ee67934411-apiservice-cert\") pod \"metallb-operator-webhook-server-66f8559b6f-b4zgv\" (UID: \"340aa282-d1b7-4386-a768-63ee67934411\") " pod="metallb-system/metallb-operator-webhook-server-66f8559b6f-b4zgv" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.665753 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtkbc\" (UniqueName: \"kubernetes.io/projected/340aa282-d1b7-4386-a768-63ee67934411-kube-api-access-vtkbc\") pod \"metallb-operator-webhook-server-66f8559b6f-b4zgv\" (UID: \"340aa282-d1b7-4386-a768-63ee67934411\") " pod="metallb-system/metallb-operator-webhook-server-66f8559b6f-b4zgv" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.666078 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/340aa282-d1b7-4386-a768-63ee67934411-apiservice-cert\") pod \"metallb-operator-webhook-server-66f8559b6f-b4zgv\" (UID: \"340aa282-d1b7-4386-a768-63ee67934411\") " pod="metallb-system/metallb-operator-webhook-server-66f8559b6f-b4zgv" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.666122 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/340aa282-d1b7-4386-a768-63ee67934411-webhook-cert\") pod \"metallb-operator-webhook-server-66f8559b6f-b4zgv\" (UID: \"340aa282-d1b7-4386-a768-63ee67934411\") " pod="metallb-system/metallb-operator-webhook-server-66f8559b6f-b4zgv" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 
12:28:29.671586 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/340aa282-d1b7-4386-a768-63ee67934411-apiservice-cert\") pod \"metallb-operator-webhook-server-66f8559b6f-b4zgv\" (UID: \"340aa282-d1b7-4386-a768-63ee67934411\") " pod="metallb-system/metallb-operator-webhook-server-66f8559b6f-b4zgv" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.686642 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/340aa282-d1b7-4386-a768-63ee67934411-webhook-cert\") pod \"metallb-operator-webhook-server-66f8559b6f-b4zgv\" (UID: \"340aa282-d1b7-4386-a768-63ee67934411\") " pod="metallb-system/metallb-operator-webhook-server-66f8559b6f-b4zgv" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.690500 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtkbc\" (UniqueName: \"kubernetes.io/projected/340aa282-d1b7-4386-a768-63ee67934411-kube-api-access-vtkbc\") pod \"metallb-operator-webhook-server-66f8559b6f-b4zgv\" (UID: \"340aa282-d1b7-4386-a768-63ee67934411\") " pod="metallb-system/metallb-operator-webhook-server-66f8559b6f-b4zgv" Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.810380 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-69c98499d8-74brb"] Jan 27 12:28:29 crc kubenswrapper[4745]: W0127 12:28:29.822581 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1273171_d32f_4231_85d4_9c949800ca10.slice/crio-143980db0037201dd028d2eb9aff13954ecc789152c2a4a61b9b217a886eefc1 WatchSource:0}: Error finding container 143980db0037201dd028d2eb9aff13954ecc789152c2a4a61b9b217a886eefc1: Status 404 returned error can't find the container with id 143980db0037201dd028d2eb9aff13954ecc789152c2a4a61b9b217a886eefc1 Jan 27 12:28:29 crc kubenswrapper[4745]: I0127 12:28:29.856092 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-66f8559b6f-b4zgv" Jan 27 12:28:30 crc kubenswrapper[4745]: I0127 12:28:30.272053 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-66f8559b6f-b4zgv"] Jan 27 12:28:30 crc kubenswrapper[4745]: I0127 12:28:30.418361 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-69c98499d8-74brb" event={"ID":"f1273171-d32f-4231-85d4-9c949800ca10","Type":"ContainerStarted","Data":"143980db0037201dd028d2eb9aff13954ecc789152c2a4a61b9b217a886eefc1"} Jan 27 12:28:30 crc kubenswrapper[4745]: I0127 12:28:30.420942 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-66f8559b6f-b4zgv" event={"ID":"340aa282-d1b7-4386-a768-63ee67934411","Type":"ContainerStarted","Data":"e34e85fbcdcb28330cf3b48642429a957b3ae3937bc8343b3995ef9e2c2ca9ed"} Jan 27 12:28:30 crc kubenswrapper[4745]: I0127 12:28:30.808579 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x7hjq" Jan 27 12:28:30 crc kubenswrapper[4745]: I0127 12:28:30.808630 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x7hjq" Jan 27 12:28:30 crc kubenswrapper[4745]: I0127 12:28:30.854851 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x7hjq" Jan 27 12:28:40 crc kubenswrapper[4745]: I0127 12:28:40.868932 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x7hjq" Jan 27 12:28:43 crc kubenswrapper[4745]: I0127 12:28:43.453639 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x7hjq"] Jan 27 12:28:43 crc kubenswrapper[4745]: I0127 12:28:43.453932 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x7hjq" podUID="05c02727-3b79-4744-85e8-492aaa86ec26" containerName="registry-server" containerID="cri-o://ff592c6cfb9b5c05e00cd9e09644b08b2e9d65d88e23b7bf602695ab2a88b18b" gracePeriod=2 Jan 27 12:28:43 crc kubenswrapper[4745]: I0127 12:28:43.632348 4745 generic.go:334] "Generic (PLEG): container finished" podID="05c02727-3b79-4744-85e8-492aaa86ec26" containerID="ff592c6cfb9b5c05e00cd9e09644b08b2e9d65d88e23b7bf602695ab2a88b18b" exitCode=0 Jan 27 12:28:43 crc kubenswrapper[4745]: I0127 12:28:43.632386 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7hjq" event={"ID":"05c02727-3b79-4744-85e8-492aaa86ec26","Type":"ContainerDied","Data":"ff592c6cfb9b5c05e00cd9e09644b08b2e9d65d88e23b7bf602695ab2a88b18b"} Jan 27 12:28:44 crc kubenswrapper[4745]: I0127 12:28:44.815902 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x7hjq" Jan 27 12:28:44 crc kubenswrapper[4745]: I0127 12:28:44.906624 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brnzs\" (UniqueName: \"kubernetes.io/projected/05c02727-3b79-4744-85e8-492aaa86ec26-kube-api-access-brnzs\") pod \"05c02727-3b79-4744-85e8-492aaa86ec26\" (UID: \"05c02727-3b79-4744-85e8-492aaa86ec26\") " Jan 27 12:28:44 crc kubenswrapper[4745]: I0127 12:28:44.906685 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05c02727-3b79-4744-85e8-492aaa86ec26-utilities\") pod \"05c02727-3b79-4744-85e8-492aaa86ec26\" (UID: \"05c02727-3b79-4744-85e8-492aaa86ec26\") " Jan 27 12:28:44 crc kubenswrapper[4745]: I0127 12:28:44.906838 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05c02727-3b79-4744-85e8-492aaa86ec26-catalog-content\") pod \"05c02727-3b79-4744-85e8-492aaa86ec26\" (UID: \"05c02727-3b79-4744-85e8-492aaa86ec26\") " Jan 27 12:28:44 crc kubenswrapper[4745]: I0127 12:28:44.910382 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05c02727-3b79-4744-85e8-492aaa86ec26-utilities" (OuterVolumeSpecName: "utilities") pod "05c02727-3b79-4744-85e8-492aaa86ec26" (UID: "05c02727-3b79-4744-85e8-492aaa86ec26"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:28:44 crc kubenswrapper[4745]: I0127 12:28:44.923666 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05c02727-3b79-4744-85e8-492aaa86ec26-kube-api-access-brnzs" (OuterVolumeSpecName: "kube-api-access-brnzs") pod "05c02727-3b79-4744-85e8-492aaa86ec26" (UID: "05c02727-3b79-4744-85e8-492aaa86ec26"). InnerVolumeSpecName "kube-api-access-brnzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:28:44 crc kubenswrapper[4745]: I0127 12:28:44.980973 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05c02727-3b79-4744-85e8-492aaa86ec26-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05c02727-3b79-4744-85e8-492aaa86ec26" (UID: "05c02727-3b79-4744-85e8-492aaa86ec26"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:28:45 crc kubenswrapper[4745]: I0127 12:28:45.008947 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05c02727-3b79-4744-85e8-492aaa86ec26-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 12:28:45 crc kubenswrapper[4745]: I0127 12:28:45.008992 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brnzs\" (UniqueName: \"kubernetes.io/projected/05c02727-3b79-4744-85e8-492aaa86ec26-kube-api-access-brnzs\") on node \"crc\" DevicePath \"\"" Jan 27 12:28:45 crc kubenswrapper[4745]: I0127 12:28:45.009009 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05c02727-3b79-4744-85e8-492aaa86ec26-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 12:28:45 crc kubenswrapper[4745]: I0127 12:28:45.655010 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-69c98499d8-74brb" event={"ID":"f1273171-d32f-4231-85d4-9c949800ca10","Type":"ContainerStarted","Data":"aea9706c83a57c3486b9e0fd8946bb96492a4fa5645d07bbe5396a30507e4caf"} Jan 27 12:28:45 crc kubenswrapper[4745]: I0127 12:28:45.655524 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-69c98499d8-74brb" Jan 27 12:28:45 crc kubenswrapper[4745]: I0127 12:28:45.657554 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-66f8559b6f-b4zgv" event={"ID":"340aa282-d1b7-4386-a768-63ee67934411","Type":"ContainerStarted","Data":"d429fda0e0c33df18e1c343b73c54dfd8c1231aa66cb7b324e006836c31646f1"} Jan 27 12:28:45 crc kubenswrapper[4745]: I0127 12:28:45.661458 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7hjq" event={"ID":"05c02727-3b79-4744-85e8-492aaa86ec26","Type":"ContainerDied","Data":"07de6e2ea781093c47055e2183743b31e69144e9f7a87d7953cc99520ba2ac4d"} Jan 27 12:28:45 crc kubenswrapper[4745]: I0127 12:28:45.661510 4745 scope.go:117] "RemoveContainer" containerID="ff592c6cfb9b5c05e00cd9e09644b08b2e9d65d88e23b7bf602695ab2a88b18b" Jan 27 12:28:45 crc kubenswrapper[4745]: I0127 12:28:45.661743 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x7hjq" Jan 27 12:28:45 crc kubenswrapper[4745]: I0127 12:28:45.684757 4745 scope.go:117] "RemoveContainer" containerID="cdf44af7c863f2e18b9eec1e5578382bd32c7218c030f233e87fd3a292e3cc66" Jan 27 12:28:45 crc kubenswrapper[4745]: I0127 12:28:45.686172 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-69c98499d8-74brb" podStartSLOduration=1.9381101360000001 podStartE2EDuration="16.68615909s" podCreationTimestamp="2026-01-27 12:28:29 +0000 UTC" firstStartedPulling="2026-01-27 12:28:29.825700829 +0000 UTC m=+1002.630611517" lastFinishedPulling="2026-01-27 12:28:44.573749783 +0000 UTC m=+1017.378660471" observedRunningTime="2026-01-27 12:28:45.685633515 +0000 UTC m=+1018.490544203" watchObservedRunningTime="2026-01-27 12:28:45.68615909 +0000 UTC m=+1018.491069778" Jan 27 12:28:45 crc kubenswrapper[4745]: I0127 12:28:45.702499 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x7hjq"] Jan 27 12:28:45 crc kubenswrapper[4745]: I0127 12:28:45.704583 4745 scope.go:117] "RemoveContainer" containerID="19ab932cbbc93f727712c7fae88cd4a86a601be5ae3346d862f20d18e71c532e" Jan 27 12:28:45 crc kubenswrapper[4745]: I0127 12:28:45.711217 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x7hjq"] Jan 27 12:28:45 crc kubenswrapper[4745]: I0127 12:28:45.740075 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-66f8559b6f-b4zgv" podStartSLOduration=2.431475371 podStartE2EDuration="16.740055155s" podCreationTimestamp="2026-01-27 12:28:29 +0000 UTC" firstStartedPulling="2026-01-27 12:28:30.288713598 +0000 UTC m=+1003.093624286" lastFinishedPulling="2026-01-27 12:28:44.597293382 +0000 UTC m=+1017.402204070" observedRunningTime="2026-01-27 12:28:45.735062301 +0000 UTC m=+1018.539972989" watchObservedRunningTime="2026-01-27 12:28:45.740055155 +0000 UTC m=+1018.544965843" Jan 27 12:28:46 crc kubenswrapper[4745]: I0127 12:28:46.080584 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05c02727-3b79-4744-85e8-492aaa86ec26" path="/var/lib/kubelet/pods/05c02727-3b79-4744-85e8-492aaa86ec26/volumes" Jan 27 12:28:46 crc kubenswrapper[4745]: I0127 12:28:46.668416 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-66f8559b6f-b4zgv" Jan 27 12:28:59 crc kubenswrapper[4745]: I0127 12:28:59.861518 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-66f8559b6f-b4zgv" Jan 27 12:29:00 crc kubenswrapper[4745]: I0127 12:29:00.864785 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tz9vj"] Jan 27 12:29:00 crc kubenswrapper[4745]: E0127 12:29:00.865439 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c02727-3b79-4744-85e8-492aaa86ec26" containerName="extract-utilities" Jan 27 12:29:00 crc kubenswrapper[4745]: I0127 12:29:00.865468 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c02727-3b79-4744-85e8-492aaa86ec26" containerName="extract-utilities" Jan 27 12:29:00 crc kubenswrapper[4745]: E0127 12:29:00.865490 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c02727-3b79-4744-85e8-492aaa86ec26" containerName="registry-server" Jan 27 12:29:00 crc 
kubenswrapper[4745]: I0127 12:29:00.865506 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c02727-3b79-4744-85e8-492aaa86ec26" containerName="registry-server" Jan 27 12:29:00 crc kubenswrapper[4745]: E0127 12:29:00.865520 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c02727-3b79-4744-85e8-492aaa86ec26" containerName="extract-content" Jan 27 12:29:00 crc kubenswrapper[4745]: I0127 12:29:00.865528 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c02727-3b79-4744-85e8-492aaa86ec26" containerName="extract-content" Jan 27 12:29:00 crc kubenswrapper[4745]: I0127 12:29:00.865673 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="05c02727-3b79-4744-85e8-492aaa86ec26" containerName="registry-server" Jan 27 12:29:00 crc kubenswrapper[4745]: I0127 12:29:00.866644 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tz9vj" Jan 27 12:29:00 crc kubenswrapper[4745]: I0127 12:29:00.883689 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tz9vj"] Jan 27 12:29:00 crc kubenswrapper[4745]: I0127 12:29:00.969537 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx85c\" (UniqueName: \"kubernetes.io/projected/944f8990-0f9f-43ba-88f2-bb10ce2e95d7-kube-api-access-rx85c\") pod \"certified-operators-tz9vj\" (UID: \"944f8990-0f9f-43ba-88f2-bb10ce2e95d7\") " pod="openshift-marketplace/certified-operators-tz9vj" Jan 27 12:29:00 crc kubenswrapper[4745]: I0127 12:29:00.969582 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/944f8990-0f9f-43ba-88f2-bb10ce2e95d7-catalog-content\") pod \"certified-operators-tz9vj\" (UID: \"944f8990-0f9f-43ba-88f2-bb10ce2e95d7\") " pod="openshift-marketplace/certified-operators-tz9vj" Jan 27 12:29:00 crc kubenswrapper[4745]: I0127 12:29:00.969608 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/944f8990-0f9f-43ba-88f2-bb10ce2e95d7-utilities\") pod \"certified-operators-tz9vj\" (UID: \"944f8990-0f9f-43ba-88f2-bb10ce2e95d7\") " pod="openshift-marketplace/certified-operators-tz9vj" Jan 27 12:29:01 crc kubenswrapper[4745]: I0127 12:29:01.070624 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx85c\" (UniqueName: \"kubernetes.io/projected/944f8990-0f9f-43ba-88f2-bb10ce2e95d7-kube-api-access-rx85c\") pod \"certified-operators-tz9vj\" (UID: \"944f8990-0f9f-43ba-88f2-bb10ce2e95d7\") " pod="openshift-marketplace/certified-operators-tz9vj" Jan 27 12:29:01 crc kubenswrapper[4745]: I0127 12:29:01.070669 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/944f8990-0f9f-43ba-88f2-bb10ce2e95d7-catalog-content\") pod \"certified-operators-tz9vj\" (UID: \"944f8990-0f9f-43ba-88f2-bb10ce2e95d7\") " pod="openshift-marketplace/certified-operators-tz9vj" Jan 27 12:29:01 crc kubenswrapper[4745]: I0127 12:29:01.070695 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/944f8990-0f9f-43ba-88f2-bb10ce2e95d7-utilities\") pod \"certified-operators-tz9vj\" (UID: \"944f8990-0f9f-43ba-88f2-bb10ce2e95d7\") " 
pod="openshift-marketplace/certified-operators-tz9vj" Jan 27 12:29:01 crc kubenswrapper[4745]: I0127 12:29:01.071181 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/944f8990-0f9f-43ba-88f2-bb10ce2e95d7-utilities\") pod \"certified-operators-tz9vj\" (UID: \"944f8990-0f9f-43ba-88f2-bb10ce2e95d7\") " pod="openshift-marketplace/certified-operators-tz9vj" Jan 27 12:29:01 crc kubenswrapper[4745]: I0127 12:29:01.071551 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/944f8990-0f9f-43ba-88f2-bb10ce2e95d7-catalog-content\") pod \"certified-operators-tz9vj\" (UID: \"944f8990-0f9f-43ba-88f2-bb10ce2e95d7\") " pod="openshift-marketplace/certified-operators-tz9vj" Jan 27 12:29:01 crc kubenswrapper[4745]: I0127 12:29:01.091138 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx85c\" (UniqueName: \"kubernetes.io/projected/944f8990-0f9f-43ba-88f2-bb10ce2e95d7-kube-api-access-rx85c\") pod \"certified-operators-tz9vj\" (UID: \"944f8990-0f9f-43ba-88f2-bb10ce2e95d7\") " pod="openshift-marketplace/certified-operators-tz9vj" Jan 27 12:29:01 crc kubenswrapper[4745]: I0127 12:29:01.185109 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tz9vj" Jan 27 12:29:01 crc kubenswrapper[4745]: I0127 12:29:01.436612 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tz9vj"] Jan 27 12:29:01 crc kubenswrapper[4745]: W0127 12:29:01.442112 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod944f8990_0f9f_43ba_88f2_bb10ce2e95d7.slice/crio-94a8a61ffcce34082bd8f22a2f7818ac4031f5f885b9eff16e1af6552e30ddcc WatchSource:0}: Error finding container 94a8a61ffcce34082bd8f22a2f7818ac4031f5f885b9eff16e1af6552e30ddcc: Status 404 returned error can't find the container with id 94a8a61ffcce34082bd8f22a2f7818ac4031f5f885b9eff16e1af6552e30ddcc Jan 27 12:29:01 crc kubenswrapper[4745]: I0127 12:29:01.758140 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tz9vj" event={"ID":"944f8990-0f9f-43ba-88f2-bb10ce2e95d7","Type":"ContainerStarted","Data":"94a8a61ffcce34082bd8f22a2f7818ac4031f5f885b9eff16e1af6552e30ddcc"} Jan 27 12:29:02 crc kubenswrapper[4745]: I0127 12:29:02.767795 4745 generic.go:334] "Generic (PLEG): container finished" podID="944f8990-0f9f-43ba-88f2-bb10ce2e95d7" containerID="2363a2dc00616c5d7f49401d33eb1c3632da7c251ecbc470008f0a1dcc403d3a" exitCode=0 Jan 27 12:29:02 crc kubenswrapper[4745]: I0127 12:29:02.768086 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tz9vj" event={"ID":"944f8990-0f9f-43ba-88f2-bb10ce2e95d7","Type":"ContainerDied","Data":"2363a2dc00616c5d7f49401d33eb1c3632da7c251ecbc470008f0a1dcc403d3a"} Jan 27 12:29:04 crc kubenswrapper[4745]: I0127 12:29:04.781540 4745 generic.go:334] "Generic (PLEG): container finished" podID="944f8990-0f9f-43ba-88f2-bb10ce2e95d7" containerID="6fcfc3a8a4d8a10a62ce56c3d73592039bb5bb140f3e188265314c3ed4403537" exitCode=0 Jan 27 12:29:04 crc kubenswrapper[4745]: I0127 12:29:04.781624 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tz9vj" 
event={"ID":"944f8990-0f9f-43ba-88f2-bb10ce2e95d7","Type":"ContainerDied","Data":"6fcfc3a8a4d8a10a62ce56c3d73592039bb5bb140f3e188265314c3ed4403537"} Jan 27 12:29:05 crc kubenswrapper[4745]: I0127 12:29:05.973843 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 12:29:05 crc kubenswrapper[4745]: I0127 12:29:05.973914 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 12:29:07 crc kubenswrapper[4745]: I0127 12:29:07.491391 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t77xs"] Jan 27 12:29:07 crc kubenswrapper[4745]: I0127 12:29:07.493428 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t77xs" Jan 27 12:29:07 crc kubenswrapper[4745]: I0127 12:29:07.507306 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t77xs"] Jan 27 12:29:07 crc kubenswrapper[4745]: I0127 12:29:07.596897 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnc9n\" (UniqueName: \"kubernetes.io/projected/dd56c13f-df8c-4107-9774-2f978d7bd61c-kube-api-access-gnc9n\") pod \"redhat-marketplace-t77xs\" (UID: \"dd56c13f-df8c-4107-9774-2f978d7bd61c\") " pod="openshift-marketplace/redhat-marketplace-t77xs" Jan 27 12:29:07 crc kubenswrapper[4745]: I0127 12:29:07.596954 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd56c13f-df8c-4107-9774-2f978d7bd61c-catalog-content\") pod \"redhat-marketplace-t77xs\" (UID: \"dd56c13f-df8c-4107-9774-2f978d7bd61c\") " pod="openshift-marketplace/redhat-marketplace-t77xs" Jan 27 12:29:07 crc kubenswrapper[4745]: I0127 12:29:07.597008 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd56c13f-df8c-4107-9774-2f978d7bd61c-utilities\") pod \"redhat-marketplace-t77xs\" (UID: \"dd56c13f-df8c-4107-9774-2f978d7bd61c\") " pod="openshift-marketplace/redhat-marketplace-t77xs" Jan 27 12:29:07 crc kubenswrapper[4745]: I0127 12:29:07.698255 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd56c13f-df8c-4107-9774-2f978d7bd61c-utilities\") pod \"redhat-marketplace-t77xs\" (UID: \"dd56c13f-df8c-4107-9774-2f978d7bd61c\") " pod="openshift-marketplace/redhat-marketplace-t77xs" Jan 27 12:29:07 crc kubenswrapper[4745]: I0127 12:29:07.698368 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnc9n\" (UniqueName: \"kubernetes.io/projected/dd56c13f-df8c-4107-9774-2f978d7bd61c-kube-api-access-gnc9n\") pod \"redhat-marketplace-t77xs\" (UID: \"dd56c13f-df8c-4107-9774-2f978d7bd61c\") " pod="openshift-marketplace/redhat-marketplace-t77xs" Jan 27 12:29:07 crc kubenswrapper[4745]: I0127 12:29:07.698393 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd56c13f-df8c-4107-9774-2f978d7bd61c-catalog-content\") pod \"redhat-marketplace-t77xs\" (UID: \"dd56c13f-df8c-4107-9774-2f978d7bd61c\") " pod="openshift-marketplace/redhat-marketplace-t77xs" Jan 27 12:29:07 crc kubenswrapper[4745]: I0127 12:29:07.698981 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd56c13f-df8c-4107-9774-2f978d7bd61c-catalog-content\") pod \"redhat-marketplace-t77xs\" (UID: \"dd56c13f-df8c-4107-9774-2f978d7bd61c\") " pod="openshift-marketplace/redhat-marketplace-t77xs" Jan 27 12:29:07 crc kubenswrapper[4745]: I0127 12:29:07.699226 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd56c13f-df8c-4107-9774-2f978d7bd61c-utilities\") pod \"redhat-marketplace-t77xs\" (UID: \"dd56c13f-df8c-4107-9774-2f978d7bd61c\") " pod="openshift-marketplace/redhat-marketplace-t77xs" Jan 27 12:29:07 crc kubenswrapper[4745]: I0127 12:29:07.726244 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnc9n\" (UniqueName: \"kubernetes.io/projected/dd56c13f-df8c-4107-9774-2f978d7bd61c-kube-api-access-gnc9n\") pod \"redhat-marketplace-t77xs\" (UID: \"dd56c13f-df8c-4107-9774-2f978d7bd61c\") " pod="openshift-marketplace/redhat-marketplace-t77xs" Jan 27 12:29:07 crc kubenswrapper[4745]: I0127 12:29:07.800974 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tz9vj" event={"ID":"944f8990-0f9f-43ba-88f2-bb10ce2e95d7","Type":"ContainerStarted","Data":"278fc1c0bae21f1f591797d1080ad28a6e3f59bb6ae2f01a5f924068c9654acf"} Jan 27 12:29:07 crc kubenswrapper[4745]: I0127 12:29:07.810208 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t77xs" Jan 27 12:29:07 crc kubenswrapper[4745]: I0127 12:29:07.818237 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tz9vj" podStartSLOduration=3.947878385 podStartE2EDuration="7.818221219s" podCreationTimestamp="2026-01-27 12:29:00 +0000 UTC" firstStartedPulling="2026-01-27 12:29:02.769349111 +0000 UTC m=+1035.574259799" lastFinishedPulling="2026-01-27 12:29:06.639691945 +0000 UTC m=+1039.444602633" observedRunningTime="2026-01-27 12:29:07.816945832 +0000 UTC m=+1040.621856530" watchObservedRunningTime="2026-01-27 12:29:07.818221219 +0000 UTC m=+1040.623131907" Jan 27 12:29:08 crc kubenswrapper[4745]: I0127 12:29:08.230293 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t77xs"] Jan 27 12:29:08 crc kubenswrapper[4745]: W0127 12:29:08.240408 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd56c13f_df8c_4107_9774_2f978d7bd61c.slice/crio-d94444e73130a353695c431b18b3c512eb1d28101eabf22fc0efdf2518ac153e WatchSource:0}: Error finding container d94444e73130a353695c431b18b3c512eb1d28101eabf22fc0efdf2518ac153e: Status 404 returned error can't find the container with id d94444e73130a353695c431b18b3c512eb1d28101eabf22fc0efdf2518ac153e Jan 27 12:29:08 crc kubenswrapper[4745]: I0127 12:29:08.809368 4745 generic.go:334] "Generic (PLEG): container finished" podID="dd56c13f-df8c-4107-9774-2f978d7bd61c" containerID="07d9c97e7ef7c1bdcf1bb302db957fb9316637153f334990bef448a65378c58f" exitCode=0 Jan 27 12:29:08 crc kubenswrapper[4745]: I0127 12:29:08.809449 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t77xs" event={"ID":"dd56c13f-df8c-4107-9774-2f978d7bd61c","Type":"ContainerDied","Data":"07d9c97e7ef7c1bdcf1bb302db957fb9316637153f334990bef448a65378c58f"} Jan 27 12:29:08 crc kubenswrapper[4745]: I0127 12:29:08.809708 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t77xs" event={"ID":"dd56c13f-df8c-4107-9774-2f978d7bd61c","Type":"ContainerStarted","Data":"d94444e73130a353695c431b18b3c512eb1d28101eabf22fc0efdf2518ac153e"} Jan 27 12:29:11 crc kubenswrapper[4745]: I0127 12:29:11.186459 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tz9vj" Jan 27 12:29:11 crc kubenswrapper[4745]: I0127 12:29:11.186827 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tz9vj" Jan 27 12:29:11 crc kubenswrapper[4745]: I0127 12:29:11.225865 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tz9vj" Jan 27 12:29:11 crc kubenswrapper[4745]: I0127 12:29:11.860773 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tz9vj" Jan 27 12:29:12 crc kubenswrapper[4745]: I0127 12:29:12.877855 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tz9vj"] Jan 27 12:29:13 crc kubenswrapper[4745]: I0127 12:29:13.840128 4745 generic.go:334] "Generic (PLEG): container finished" podID="dd56c13f-df8c-4107-9774-2f978d7bd61c" containerID="3ebefa4324686ed18c5b8ac93d52d977b7f9fbb5f49126ea61667b2d39ff6c2d" exitCode=0 Jan 27 12:29:13 crc kubenswrapper[4745]: 
I0127 12:29:13.840206 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t77xs" event={"ID":"dd56c13f-df8c-4107-9774-2f978d7bd61c","Type":"ContainerDied","Data":"3ebefa4324686ed18c5b8ac93d52d977b7f9fbb5f49126ea61667b2d39ff6c2d"} Jan 27 12:29:13 crc kubenswrapper[4745]: I0127 12:29:13.840325 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tz9vj" podUID="944f8990-0f9f-43ba-88f2-bb10ce2e95d7" containerName="registry-server" containerID="cri-o://278fc1c0bae21f1f591797d1080ad28a6e3f59bb6ae2f01a5f924068c9654acf" gracePeriod=2 Jan 27 12:29:14 crc kubenswrapper[4745]: I0127 12:29:14.852876 4745 generic.go:334] "Generic (PLEG): container finished" podID="944f8990-0f9f-43ba-88f2-bb10ce2e95d7" containerID="278fc1c0bae21f1f591797d1080ad28a6e3f59bb6ae2f01a5f924068c9654acf" exitCode=0 Jan 27 12:29:14 crc kubenswrapper[4745]: I0127 12:29:14.853326 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tz9vj" event={"ID":"944f8990-0f9f-43ba-88f2-bb10ce2e95d7","Type":"ContainerDied","Data":"278fc1c0bae21f1f591797d1080ad28a6e3f59bb6ae2f01a5f924068c9654acf"} Jan 27 12:29:14 crc kubenswrapper[4745]: I0127 12:29:14.942739 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tz9vj" Jan 27 12:29:15 crc kubenswrapper[4745]: I0127 12:29:15.100390 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/944f8990-0f9f-43ba-88f2-bb10ce2e95d7-catalog-content\") pod \"944f8990-0f9f-43ba-88f2-bb10ce2e95d7\" (UID: \"944f8990-0f9f-43ba-88f2-bb10ce2e95d7\") " Jan 27 12:29:15 crc kubenswrapper[4745]: I0127 12:29:15.100786 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/944f8990-0f9f-43ba-88f2-bb10ce2e95d7-utilities\") pod \"944f8990-0f9f-43ba-88f2-bb10ce2e95d7\" (UID: \"944f8990-0f9f-43ba-88f2-bb10ce2e95d7\") " Jan 27 12:29:15 crc kubenswrapper[4745]: I0127 12:29:15.101001 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rx85c\" (UniqueName: \"kubernetes.io/projected/944f8990-0f9f-43ba-88f2-bb10ce2e95d7-kube-api-access-rx85c\") pod \"944f8990-0f9f-43ba-88f2-bb10ce2e95d7\" (UID: \"944f8990-0f9f-43ba-88f2-bb10ce2e95d7\") " Jan 27 12:29:15 crc kubenswrapper[4745]: I0127 12:29:15.106186 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/944f8990-0f9f-43ba-88f2-bb10ce2e95d7-utilities" (OuterVolumeSpecName: "utilities") pod "944f8990-0f9f-43ba-88f2-bb10ce2e95d7" (UID: "944f8990-0f9f-43ba-88f2-bb10ce2e95d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:29:15 crc kubenswrapper[4745]: I0127 12:29:15.106965 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/944f8990-0f9f-43ba-88f2-bb10ce2e95d7-kube-api-access-rx85c" (OuterVolumeSpecName: "kube-api-access-rx85c") pod "944f8990-0f9f-43ba-88f2-bb10ce2e95d7" (UID: "944f8990-0f9f-43ba-88f2-bb10ce2e95d7"). InnerVolumeSpecName "kube-api-access-rx85c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:29:15 crc kubenswrapper[4745]: I0127 12:29:15.202597 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rx85c\" (UniqueName: \"kubernetes.io/projected/944f8990-0f9f-43ba-88f2-bb10ce2e95d7-kube-api-access-rx85c\") on node \"crc\" DevicePath \"\"" Jan 27 12:29:15 crc kubenswrapper[4745]: I0127 12:29:15.202748 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/944f8990-0f9f-43ba-88f2-bb10ce2e95d7-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 12:29:15 crc kubenswrapper[4745]: I0127 12:29:15.860058 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tz9vj" event={"ID":"944f8990-0f9f-43ba-88f2-bb10ce2e95d7","Type":"ContainerDied","Data":"94a8a61ffcce34082bd8f22a2f7818ac4031f5f885b9eff16e1af6552e30ddcc"} Jan 27 12:29:15 crc kubenswrapper[4745]: I0127 12:29:15.860113 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tz9vj" Jan 27 12:29:15 crc kubenswrapper[4745]: I0127 12:29:15.860132 4745 scope.go:117] "RemoveContainer" containerID="278fc1c0bae21f1f591797d1080ad28a6e3f59bb6ae2f01a5f924068c9654acf" Jan 27 12:29:15 crc kubenswrapper[4745]: I0127 12:29:15.862059 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t77xs" event={"ID":"dd56c13f-df8c-4107-9774-2f978d7bd61c","Type":"ContainerStarted","Data":"fcd6081fd15476fc134e5598fbb942af35b677c6976b8f8ef1472a902311239e"} Jan 27 12:29:15 crc kubenswrapper[4745]: I0127 12:29:15.875017 4745 scope.go:117] "RemoveContainer" containerID="6fcfc3a8a4d8a10a62ce56c3d73592039bb5bb140f3e188265314c3ed4403537" Jan 27 12:29:15 crc kubenswrapper[4745]: I0127 12:29:15.884611 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t77xs" podStartSLOduration=3.007776715 podStartE2EDuration="8.884590673s" podCreationTimestamp="2026-01-27 12:29:07 +0000 UTC" firstStartedPulling="2026-01-27 12:29:08.812476327 +0000 UTC m=+1041.617387015" lastFinishedPulling="2026-01-27 12:29:14.689290275 +0000 UTC m=+1047.494200973" observedRunningTime="2026-01-27 12:29:15.881911285 +0000 UTC m=+1048.686821973" watchObservedRunningTime="2026-01-27 12:29:15.884590673 +0000 UTC m=+1048.689501361" Jan 27 12:29:15 crc kubenswrapper[4745]: I0127 12:29:15.890158 4745 scope.go:117] "RemoveContainer" containerID="2363a2dc00616c5d7f49401d33eb1c3632da7c251ecbc470008f0a1dcc403d3a" Jan 27 12:29:16 crc kubenswrapper[4745]: I0127 12:29:16.285499 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/944f8990-0f9f-43ba-88f2-bb10ce2e95d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "944f8990-0f9f-43ba-88f2-bb10ce2e95d7" (UID: "944f8990-0f9f-43ba-88f2-bb10ce2e95d7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:29:16 crc kubenswrapper[4745]: I0127 12:29:16.317808 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/944f8990-0f9f-43ba-88f2-bb10ce2e95d7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 12:29:16 crc kubenswrapper[4745]: I0127 12:29:16.485398 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tz9vj"] Jan 27 12:29:16 crc kubenswrapper[4745]: I0127 12:29:16.490848 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tz9vj"] Jan 27 12:29:17 crc kubenswrapper[4745]: I0127 12:29:17.810421 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t77xs" Jan 27 12:29:17 crc kubenswrapper[4745]: I0127 12:29:17.811127 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t77xs" Jan 27 12:29:17 crc kubenswrapper[4745]: I0127 12:29:17.858325 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t77xs" Jan 27 12:29:18 crc kubenswrapper[4745]: I0127 12:29:18.083094 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="944f8990-0f9f-43ba-88f2-bb10ce2e95d7" path="/var/lib/kubelet/pods/944f8990-0f9f-43ba-88f2-bb10ce2e95d7/volumes" Jan 27 12:29:19 crc kubenswrapper[4745]: I0127 12:29:19.524792 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-69c98499d8-74brb" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.323301 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-6tfdz"] Jan 27 12:29:20 crc kubenswrapper[4745]: E0127 12:29:20.323627 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="944f8990-0f9f-43ba-88f2-bb10ce2e95d7" containerName="extract-utilities" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.323646 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="944f8990-0f9f-43ba-88f2-bb10ce2e95d7" containerName="extract-utilities" Jan 27 12:29:20 crc kubenswrapper[4745]: E0127 12:29:20.323665 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="944f8990-0f9f-43ba-88f2-bb10ce2e95d7" containerName="extract-content" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.323672 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="944f8990-0f9f-43ba-88f2-bb10ce2e95d7" containerName="extract-content" Jan 27 12:29:20 crc kubenswrapper[4745]: E0127 12:29:20.323683 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="944f8990-0f9f-43ba-88f2-bb10ce2e95d7" containerName="registry-server" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.323690 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="944f8990-0f9f-43ba-88f2-bb10ce2e95d7" containerName="registry-server" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.323830 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="944f8990-0f9f-43ba-88f2-bb10ce2e95d7" containerName="registry-server" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.326558 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-6tfdz" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.328556 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-q97lc"] Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.329440 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-q97lc" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.330497 4745 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-bcmtq" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.331084 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.331242 4745 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.334732 4745 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.350895 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-q97lc"] Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.415299 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-nslkl"] Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.416501 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-nslkl" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.418192 4745 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-jzxwg" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.419383 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.419414 4745 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.420831 4745 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.442006 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-djdl6"] Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.443146 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-djdl6" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.445590 4745 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.473006 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-djdl6"] Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.476244 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9a95157d-c182-4ccc-a603-e314f81ac762-metrics\") pod \"frr-k8s-6tfdz\" (UID: \"9a95157d-c182-4ccc-a603-e314f81ac762\") " pod="metallb-system/frr-k8s-6tfdz" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.476289 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a95157d-c182-4ccc-a603-e314f81ac762-metrics-certs\") pod \"frr-k8s-6tfdz\" (UID: \"9a95157d-c182-4ccc-a603-e314f81ac762\") " pod="metallb-system/frr-k8s-6tfdz" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.476322 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9a95157d-c182-4ccc-a603-e314f81ac762-frr-sockets\") pod \"frr-k8s-6tfdz\" (UID: \"9a95157d-c182-4ccc-a603-e314f81ac762\") " pod="metallb-system/frr-k8s-6tfdz" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.476359 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnq88\" (UniqueName: \"kubernetes.io/projected/fa5b5f13-a8ba-490e-97e4-3383c24a13c4-kube-api-access-tnq88\") pod \"frr-k8s-webhook-server-7df86c4f6c-q97lc\" (UID: \"fa5b5f13-a8ba-490e-97e4-3383c24a13c4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-q97lc" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.476385 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa5b5f13-a8ba-490e-97e4-3383c24a13c4-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-q97lc\" (UID: \"fa5b5f13-a8ba-490e-97e4-3383c24a13c4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-q97lc" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.476429 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9a95157d-c182-4ccc-a603-e314f81ac762-frr-conf\") pod \"frr-k8s-6tfdz\" (UID: \"9a95157d-c182-4ccc-a603-e314f81ac762\") " pod="metallb-system/frr-k8s-6tfdz" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.476460 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjpts\" (UniqueName: \"kubernetes.io/projected/9a95157d-c182-4ccc-a603-e314f81ac762-kube-api-access-wjpts\") pod \"frr-k8s-6tfdz\" (UID: \"9a95157d-c182-4ccc-a603-e314f81ac762\") " pod="metallb-system/frr-k8s-6tfdz" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.476509 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9a95157d-c182-4ccc-a603-e314f81ac762-frr-startup\") pod \"frr-k8s-6tfdz\" (UID: \"9a95157d-c182-4ccc-a603-e314f81ac762\") " pod="metallb-system/frr-k8s-6tfdz" Jan 27 
12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.476550 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9a95157d-c182-4ccc-a603-e314f81ac762-reloader\") pod \"frr-k8s-6tfdz\" (UID: \"9a95157d-c182-4ccc-a603-e314f81ac762\") " pod="metallb-system/frr-k8s-6tfdz" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.578135 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9a95157d-c182-4ccc-a603-e314f81ac762-reloader\") pod \"frr-k8s-6tfdz\" (UID: \"9a95157d-c182-4ccc-a603-e314f81ac762\") " pod="metallb-system/frr-k8s-6tfdz" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.578462 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9a95157d-c182-4ccc-a603-e314f81ac762-reloader\") pod \"frr-k8s-6tfdz\" (UID: \"9a95157d-c182-4ccc-a603-e314f81ac762\") " pod="metallb-system/frr-k8s-6tfdz" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.578514 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcp8s\" (UniqueName: \"kubernetes.io/projected/ecb4dc8b-c615-4fe2-819f-4c799f639d3f-kube-api-access-fcp8s\") pod \"controller-6968d8fdc4-djdl6\" (UID: \"ecb4dc8b-c615-4fe2-819f-4c799f639d3f\") " pod="metallb-system/controller-6968d8fdc4-djdl6" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.578540 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ecb4dc8b-c615-4fe2-819f-4c799f639d3f-cert\") pod \"controller-6968d8fdc4-djdl6\" (UID: \"ecb4dc8b-c615-4fe2-819f-4c799f639d3f\") " pod="metallb-system/controller-6968d8fdc4-djdl6" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.578592 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/802102ed-a580-4495-9855-d86f54160441-memberlist\") pod \"speaker-nslkl\" (UID: \"802102ed-a580-4495-9855-d86f54160441\") " pod="metallb-system/speaker-nslkl" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.578613 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9a95157d-c182-4ccc-a603-e314f81ac762-metrics\") pod \"frr-k8s-6tfdz\" (UID: \"9a95157d-c182-4ccc-a603-e314f81ac762\") " pod="metallb-system/frr-k8s-6tfdz" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.578627 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a95157d-c182-4ccc-a603-e314f81ac762-metrics-certs\") pod \"frr-k8s-6tfdz\" (UID: \"9a95157d-c182-4ccc-a603-e314f81ac762\") " pod="metallb-system/frr-k8s-6tfdz" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.578710 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9a95157d-c182-4ccc-a603-e314f81ac762-frr-sockets\") pod \"frr-k8s-6tfdz\" (UID: \"9a95157d-c182-4ccc-a603-e314f81ac762\") " pod="metallb-system/frr-k8s-6tfdz" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.578755 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: 
\"kubernetes.io/configmap/802102ed-a580-4495-9855-d86f54160441-metallb-excludel2\") pod \"speaker-nslkl\" (UID: \"802102ed-a580-4495-9855-d86f54160441\") " pod="metallb-system/speaker-nslkl" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.578796 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9a95157d-c182-4ccc-a603-e314f81ac762-metrics\") pod \"frr-k8s-6tfdz\" (UID: \"9a95157d-c182-4ccc-a603-e314f81ac762\") " pod="metallb-system/frr-k8s-6tfdz" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.578819 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnq88\" (UniqueName: \"kubernetes.io/projected/fa5b5f13-a8ba-490e-97e4-3383c24a13c4-kube-api-access-tnq88\") pod \"frr-k8s-webhook-server-7df86c4f6c-q97lc\" (UID: \"fa5b5f13-a8ba-490e-97e4-3383c24a13c4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-q97lc" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.579110 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa5b5f13-a8ba-490e-97e4-3383c24a13c4-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-q97lc\" (UID: \"fa5b5f13-a8ba-490e-97e4-3383c24a13c4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-q97lc" Jan 27 12:29:20 crc kubenswrapper[4745]: E0127 12:29:20.579190 4745 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.579230 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkhbn\" (UniqueName: \"kubernetes.io/projected/802102ed-a580-4495-9855-d86f54160441-kube-api-access-tkhbn\") pod \"speaker-nslkl\" (UID: \"802102ed-a580-4495-9855-d86f54160441\") " pod="metallb-system/speaker-nslkl" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.579254 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9a95157d-c182-4ccc-a603-e314f81ac762-frr-conf\") pod \"frr-k8s-6tfdz\" (UID: \"9a95157d-c182-4ccc-a603-e314f81ac762\") " pod="metallb-system/frr-k8s-6tfdz" Jan 27 12:29:20 crc kubenswrapper[4745]: E0127 12:29:20.579297 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa5b5f13-a8ba-490e-97e4-3383c24a13c4-cert podName:fa5b5f13-a8ba-490e-97e4-3383c24a13c4 nodeName:}" failed. No retries permitted until 2026-01-27 12:29:21.079274672 +0000 UTC m=+1053.884185360 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fa5b5f13-a8ba-490e-97e4-3383c24a13c4-cert") pod "frr-k8s-webhook-server-7df86c4f6c-q97lc" (UID: "fa5b5f13-a8ba-490e-97e4-3383c24a13c4") : secret "frr-k8s-webhook-server-cert" not found Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.579318 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjpts\" (UniqueName: \"kubernetes.io/projected/9a95157d-c182-4ccc-a603-e314f81ac762-kube-api-access-wjpts\") pod \"frr-k8s-6tfdz\" (UID: \"9a95157d-c182-4ccc-a603-e314f81ac762\") " pod="metallb-system/frr-k8s-6tfdz" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.579339 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/802102ed-a580-4495-9855-d86f54160441-metrics-certs\") pod \"speaker-nslkl\" (UID: \"802102ed-a580-4495-9855-d86f54160441\") " pod="metallb-system/speaker-nslkl" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.579358 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ecb4dc8b-c615-4fe2-819f-4c799f639d3f-metrics-certs\") pod \"controller-6968d8fdc4-djdl6\" (UID: \"ecb4dc8b-c615-4fe2-819f-4c799f639d3f\") " pod="metallb-system/controller-6968d8fdc4-djdl6" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.579451 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9a95157d-c182-4ccc-a603-e314f81ac762-frr-startup\") pod \"frr-k8s-6tfdz\" (UID: \"9a95157d-c182-4ccc-a603-e314f81ac762\") " pod="metallb-system/frr-k8s-6tfdz" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.579516 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9a95157d-c182-4ccc-a603-e314f81ac762-frr-conf\") pod \"frr-k8s-6tfdz\" (UID: \"9a95157d-c182-4ccc-a603-e314f81ac762\") " pod="metallb-system/frr-k8s-6tfdz" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.579598 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9a95157d-c182-4ccc-a603-e314f81ac762-frr-sockets\") pod \"frr-k8s-6tfdz\" (UID: \"9a95157d-c182-4ccc-a603-e314f81ac762\") " pod="metallb-system/frr-k8s-6tfdz" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.580201 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9a95157d-c182-4ccc-a603-e314f81ac762-frr-startup\") pod \"frr-k8s-6tfdz\" (UID: \"9a95157d-c182-4ccc-a603-e314f81ac762\") " pod="metallb-system/frr-k8s-6tfdz" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.584950 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a95157d-c182-4ccc-a603-e314f81ac762-metrics-certs\") pod \"frr-k8s-6tfdz\" (UID: \"9a95157d-c182-4ccc-a603-e314f81ac762\") " pod="metallb-system/frr-k8s-6tfdz" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.598305 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnq88\" (UniqueName: \"kubernetes.io/projected/fa5b5f13-a8ba-490e-97e4-3383c24a13c4-kube-api-access-tnq88\") pod \"frr-k8s-webhook-server-7df86c4f6c-q97lc\" (UID: \"fa5b5f13-a8ba-490e-97e4-3383c24a13c4\") " 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-q97lc" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.604476 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjpts\" (UniqueName: \"kubernetes.io/projected/9a95157d-c182-4ccc-a603-e314f81ac762-kube-api-access-wjpts\") pod \"frr-k8s-6tfdz\" (UID: \"9a95157d-c182-4ccc-a603-e314f81ac762\") " pod="metallb-system/frr-k8s-6tfdz" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.641930 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-6tfdz" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.680631 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkhbn\" (UniqueName: \"kubernetes.io/projected/802102ed-a580-4495-9855-d86f54160441-kube-api-access-tkhbn\") pod \"speaker-nslkl\" (UID: \"802102ed-a580-4495-9855-d86f54160441\") " pod="metallb-system/speaker-nslkl" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.680683 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ecb4dc8b-c615-4fe2-819f-4c799f639d3f-metrics-certs\") pod \"controller-6968d8fdc4-djdl6\" (UID: \"ecb4dc8b-c615-4fe2-819f-4c799f639d3f\") " pod="metallb-system/controller-6968d8fdc4-djdl6" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.680698 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/802102ed-a580-4495-9855-d86f54160441-metrics-certs\") pod \"speaker-nslkl\" (UID: \"802102ed-a580-4495-9855-d86f54160441\") " pod="metallb-system/speaker-nslkl" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.680761 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcp8s\" (UniqueName: \"kubernetes.io/projected/ecb4dc8b-c615-4fe2-819f-4c799f639d3f-kube-api-access-fcp8s\") pod \"controller-6968d8fdc4-djdl6\" (UID: \"ecb4dc8b-c615-4fe2-819f-4c799f639d3f\") " pod="metallb-system/controller-6968d8fdc4-djdl6" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.680790 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ecb4dc8b-c615-4fe2-819f-4c799f639d3f-cert\") pod \"controller-6968d8fdc4-djdl6\" (UID: \"ecb4dc8b-c615-4fe2-819f-4c799f639d3f\") " pod="metallb-system/controller-6968d8fdc4-djdl6" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.680829 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/802102ed-a580-4495-9855-d86f54160441-memberlist\") pod \"speaker-nslkl\" (UID: \"802102ed-a580-4495-9855-d86f54160441\") " pod="metallb-system/speaker-nslkl" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.680860 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/802102ed-a580-4495-9855-d86f54160441-metallb-excludel2\") pod \"speaker-nslkl\" (UID: \"802102ed-a580-4495-9855-d86f54160441\") " pod="metallb-system/speaker-nslkl" Jan 27 12:29:20 crc kubenswrapper[4745]: E0127 12:29:20.681020 4745 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 12:29:20 crc kubenswrapper[4745]: E0127 12:29:20.681096 4745 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/802102ed-a580-4495-9855-d86f54160441-memberlist podName:802102ed-a580-4495-9855-d86f54160441 nodeName:}" failed. No retries permitted until 2026-01-27 12:29:21.181074089 +0000 UTC m=+1053.985984777 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/802102ed-a580-4495-9855-d86f54160441-memberlist") pod "speaker-nslkl" (UID: "802102ed-a580-4495-9855-d86f54160441") : secret "metallb-memberlist" not found Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.681690 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/802102ed-a580-4495-9855-d86f54160441-metallb-excludel2\") pod \"speaker-nslkl\" (UID: \"802102ed-a580-4495-9855-d86f54160441\") " pod="metallb-system/speaker-nslkl" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.682795 4745 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.684882 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ecb4dc8b-c615-4fe2-819f-4c799f639d3f-metrics-certs\") pod \"controller-6968d8fdc4-djdl6\" (UID: \"ecb4dc8b-c615-4fe2-819f-4c799f639d3f\") " pod="metallb-system/controller-6968d8fdc4-djdl6" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.685935 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/802102ed-a580-4495-9855-d86f54160441-metrics-certs\") pod \"speaker-nslkl\" (UID: \"802102ed-a580-4495-9855-d86f54160441\") " pod="metallb-system/speaker-nslkl" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.697281 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ecb4dc8b-c615-4fe2-819f-4c799f639d3f-cert\") pod \"controller-6968d8fdc4-djdl6\" (UID: \"ecb4dc8b-c615-4fe2-819f-4c799f639d3f\") " pod="metallb-system/controller-6968d8fdc4-djdl6" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.698587 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcp8s\" (UniqueName: \"kubernetes.io/projected/ecb4dc8b-c615-4fe2-819f-4c799f639d3f-kube-api-access-fcp8s\") pod \"controller-6968d8fdc4-djdl6\" (UID: \"ecb4dc8b-c615-4fe2-819f-4c799f639d3f\") " pod="metallb-system/controller-6968d8fdc4-djdl6" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.703946 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkhbn\" (UniqueName: \"kubernetes.io/projected/802102ed-a580-4495-9855-d86f54160441-kube-api-access-tkhbn\") pod \"speaker-nslkl\" (UID: \"802102ed-a580-4495-9855-d86f54160441\") " pod="metallb-system/speaker-nslkl" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.776620 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-djdl6" Jan 27 12:29:20 crc kubenswrapper[4745]: I0127 12:29:20.905394 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6tfdz" event={"ID":"9a95157d-c182-4ccc-a603-e314f81ac762","Type":"ContainerStarted","Data":"78975245ffc9deeec801fe0dab200dfdd5a892cb2862f972f5ce29900b34b1f6"} Jan 27 12:29:21 crc kubenswrapper[4745]: I0127 12:29:21.085567 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa5b5f13-a8ba-490e-97e4-3383c24a13c4-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-q97lc\" (UID: \"fa5b5f13-a8ba-490e-97e4-3383c24a13c4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-q97lc" Jan 27 12:29:21 crc kubenswrapper[4745]: I0127 12:29:21.089544 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa5b5f13-a8ba-490e-97e4-3383c24a13c4-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-q97lc\" (UID: \"fa5b5f13-a8ba-490e-97e4-3383c24a13c4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-q97lc" Jan 27 12:29:21 crc kubenswrapper[4745]: I0127 12:29:21.183090 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-djdl6"] Jan 27 12:29:21 crc kubenswrapper[4745]: I0127 12:29:21.187024 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/802102ed-a580-4495-9855-d86f54160441-memberlist\") pod \"speaker-nslkl\" (UID: \"802102ed-a580-4495-9855-d86f54160441\") " pod="metallb-system/speaker-nslkl" Jan 27 12:29:21 crc kubenswrapper[4745]: E0127 12:29:21.188215 4745 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 12:29:21 crc kubenswrapper[4745]: E0127 12:29:21.188266 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/802102ed-a580-4495-9855-d86f54160441-memberlist podName:802102ed-a580-4495-9855-d86f54160441 nodeName:}" failed. No retries permitted until 2026-01-27 12:29:22.188251393 +0000 UTC m=+1054.993162081 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/802102ed-a580-4495-9855-d86f54160441-memberlist") pod "speaker-nslkl" (UID: "802102ed-a580-4495-9855-d86f54160441") : secret "metallb-memberlist" not found Jan 27 12:29:21 crc kubenswrapper[4745]: I0127 12:29:21.250116 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-q97lc" Jan 27 12:29:21 crc kubenswrapper[4745]: I0127 12:29:21.706909 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-q97lc"] Jan 27 12:29:21 crc kubenswrapper[4745]: W0127 12:29:21.714712 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa5b5f13_a8ba_490e_97e4_3383c24a13c4.slice/crio-e2f570033c6460007b0fbba75d13590620c13e2626272f4fe32fef2c03848e46 WatchSource:0}: Error finding container e2f570033c6460007b0fbba75d13590620c13e2626272f4fe32fef2c03848e46: Status 404 returned error can't find the container with id e2f570033c6460007b0fbba75d13590620c13e2626272f4fe32fef2c03848e46 Jan 27 12:29:21 crc kubenswrapper[4745]: I0127 12:29:21.922406 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-djdl6" event={"ID":"ecb4dc8b-c615-4fe2-819f-4c799f639d3f","Type":"ContainerStarted","Data":"7eae21979dd243aa07011936d2f0c44fffd4cc904ac2de5268f9acca7b59ecc2"} Jan 27 12:29:21 crc kubenswrapper[4745]: I0127 12:29:21.922763 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-djdl6" event={"ID":"ecb4dc8b-c615-4fe2-819f-4c799f639d3f","Type":"ContainerStarted","Data":"a35eaa7b5875999e22f72740131d9bd36a13873807a6f96e84f81dc182a774b8"} Jan 27 12:29:21 crc kubenswrapper[4745]: I0127 12:29:21.922776 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-djdl6" event={"ID":"ecb4dc8b-c615-4fe2-819f-4c799f639d3f","Type":"ContainerStarted","Data":"f1dfb8897b67fe4d1dc2a70bde50dc3f4f8c90a13ee221fb7a0bafa7e5a6bdae"} Jan 27 12:29:21 crc kubenswrapper[4745]: I0127 12:29:21.922793 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-djdl6" Jan 27 12:29:21 crc kubenswrapper[4745]: I0127 12:29:21.923938 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-q97lc" event={"ID":"fa5b5f13-a8ba-490e-97e4-3383c24a13c4","Type":"ContainerStarted","Data":"e2f570033c6460007b0fbba75d13590620c13e2626272f4fe32fef2c03848e46"} Jan 27 12:29:21 crc kubenswrapper[4745]: I0127 12:29:21.943561 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-djdl6" podStartSLOduration=1.9435367449999998 podStartE2EDuration="1.943536745s" podCreationTimestamp="2026-01-27 12:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:29:21.938604773 +0000 UTC m=+1054.743515461" watchObservedRunningTime="2026-01-27 12:29:21.943536745 +0000 UTC m=+1054.748447433" Jan 27 12:29:22 crc kubenswrapper[4745]: I0127 12:29:22.201246 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/802102ed-a580-4495-9855-d86f54160441-memberlist\") pod \"speaker-nslkl\" (UID: \"802102ed-a580-4495-9855-d86f54160441\") " pod="metallb-system/speaker-nslkl" Jan 27 12:29:22 crc kubenswrapper[4745]: I0127 12:29:22.206789 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/802102ed-a580-4495-9855-d86f54160441-memberlist\") pod \"speaker-nslkl\" (UID: \"802102ed-a580-4495-9855-d86f54160441\") " pod="metallb-system/speaker-nslkl" 
Jan 27 12:29:22 crc kubenswrapper[4745]: I0127 12:29:22.235727 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-nslkl" Jan 27 12:29:22 crc kubenswrapper[4745]: W0127 12:29:22.256991 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod802102ed_a580_4495_9855_d86f54160441.slice/crio-ea2014bc2eeffd181410808bf4b9d44e54ec0682bdfd0dd8b083cac09b4880c9 WatchSource:0}: Error finding container ea2014bc2eeffd181410808bf4b9d44e54ec0682bdfd0dd8b083cac09b4880c9: Status 404 returned error can't find the container with id ea2014bc2eeffd181410808bf4b9d44e54ec0682bdfd0dd8b083cac09b4880c9 Jan 27 12:29:22 crc kubenswrapper[4745]: I0127 12:29:22.937089 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nslkl" event={"ID":"802102ed-a580-4495-9855-d86f54160441","Type":"ContainerStarted","Data":"86b53489a0918f0130c6bfff4781c6e1d44eed4af690ea81e57487b2d92a3c21"} Jan 27 12:29:22 crc kubenswrapper[4745]: I0127 12:29:22.937383 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nslkl" event={"ID":"802102ed-a580-4495-9855-d86f54160441","Type":"ContainerStarted","Data":"20fa23b36b867287317ac80f988431cfa7425ce2feb9a51330ad9b7d9a140bc6"} Jan 27 12:29:22 crc kubenswrapper[4745]: I0127 12:29:22.937395 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nslkl" event={"ID":"802102ed-a580-4495-9855-d86f54160441","Type":"ContainerStarted","Data":"ea2014bc2eeffd181410808bf4b9d44e54ec0682bdfd0dd8b083cac09b4880c9"} Jan 27 12:29:23 crc kubenswrapper[4745]: I0127 12:29:23.945662 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-nslkl" Jan 27 12:29:23 crc kubenswrapper[4745]: I0127 12:29:23.971768 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-nslkl" podStartSLOduration=3.971752566 podStartE2EDuration="3.971752566s" podCreationTimestamp="2026-01-27 12:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:29:23.970203431 +0000 UTC m=+1056.775115849" watchObservedRunningTime="2026-01-27 12:29:23.971752566 +0000 UTC m=+1056.776663254" Jan 27 12:29:27 crc kubenswrapper[4745]: I0127 12:29:27.995285 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t77xs" Jan 27 12:29:28 crc kubenswrapper[4745]: I0127 12:29:28.042622 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t77xs"] Jan 27 12:29:28 crc kubenswrapper[4745]: I0127 12:29:28.096281 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t77xs" podUID="dd56c13f-df8c-4107-9774-2f978d7bd61c" containerName="registry-server" containerID="cri-o://fcd6081fd15476fc134e5598fbb942af35b677c6976b8f8ef1472a902311239e" gracePeriod=2 Jan 27 12:29:29 crc kubenswrapper[4745]: I0127 12:29:29.103828 4745 generic.go:334] "Generic (PLEG): container finished" podID="dd56c13f-df8c-4107-9774-2f978d7bd61c" containerID="fcd6081fd15476fc134e5598fbb942af35b677c6976b8f8ef1472a902311239e" exitCode=0 Jan 27 12:29:29 crc kubenswrapper[4745]: I0127 12:29:29.104119 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t77xs" 
event={"ID":"dd56c13f-df8c-4107-9774-2f978d7bd61c","Type":"ContainerDied","Data":"fcd6081fd15476fc134e5598fbb942af35b677c6976b8f8ef1472a902311239e"} Jan 27 12:29:31 crc kubenswrapper[4745]: I0127 12:29:31.709374 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t77xs" Jan 27 12:29:31 crc kubenswrapper[4745]: I0127 12:29:31.854328 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd56c13f-df8c-4107-9774-2f978d7bd61c-utilities\") pod \"dd56c13f-df8c-4107-9774-2f978d7bd61c\" (UID: \"dd56c13f-df8c-4107-9774-2f978d7bd61c\") " Jan 27 12:29:31 crc kubenswrapper[4745]: I0127 12:29:31.854420 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnc9n\" (UniqueName: \"kubernetes.io/projected/dd56c13f-df8c-4107-9774-2f978d7bd61c-kube-api-access-gnc9n\") pod \"dd56c13f-df8c-4107-9774-2f978d7bd61c\" (UID: \"dd56c13f-df8c-4107-9774-2f978d7bd61c\") " Jan 27 12:29:31 crc kubenswrapper[4745]: I0127 12:29:31.854494 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd56c13f-df8c-4107-9774-2f978d7bd61c-catalog-content\") pod \"dd56c13f-df8c-4107-9774-2f978d7bd61c\" (UID: \"dd56c13f-df8c-4107-9774-2f978d7bd61c\") " Jan 27 12:29:31 crc kubenswrapper[4745]: I0127 12:29:31.855771 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd56c13f-df8c-4107-9774-2f978d7bd61c-utilities" (OuterVolumeSpecName: "utilities") pod "dd56c13f-df8c-4107-9774-2f978d7bd61c" (UID: "dd56c13f-df8c-4107-9774-2f978d7bd61c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:29:31 crc kubenswrapper[4745]: I0127 12:29:31.862101 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd56c13f-df8c-4107-9774-2f978d7bd61c-kube-api-access-gnc9n" (OuterVolumeSpecName: "kube-api-access-gnc9n") pod "dd56c13f-df8c-4107-9774-2f978d7bd61c" (UID: "dd56c13f-df8c-4107-9774-2f978d7bd61c"). InnerVolumeSpecName "kube-api-access-gnc9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:29:31 crc kubenswrapper[4745]: I0127 12:29:31.876624 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd56c13f-df8c-4107-9774-2f978d7bd61c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd56c13f-df8c-4107-9774-2f978d7bd61c" (UID: "dd56c13f-df8c-4107-9774-2f978d7bd61c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:29:31 crc kubenswrapper[4745]: I0127 12:29:31.956100 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnc9n\" (UniqueName: \"kubernetes.io/projected/dd56c13f-df8c-4107-9774-2f978d7bd61c-kube-api-access-gnc9n\") on node \"crc\" DevicePath \"\"" Jan 27 12:29:31 crc kubenswrapper[4745]: I0127 12:29:31.956134 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd56c13f-df8c-4107-9774-2f978d7bd61c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 12:29:31 crc kubenswrapper[4745]: I0127 12:29:31.956145 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd56c13f-df8c-4107-9774-2f978d7bd61c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 12:29:32 crc kubenswrapper[4745]: I0127 12:29:32.123655 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t77xs" event={"ID":"dd56c13f-df8c-4107-9774-2f978d7bd61c","Type":"ContainerDied","Data":"d94444e73130a353695c431b18b3c512eb1d28101eabf22fc0efdf2518ac153e"} Jan 27 12:29:32 crc kubenswrapper[4745]: I0127 12:29:32.123744 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t77xs" Jan 27 12:29:32 crc kubenswrapper[4745]: I0127 12:29:32.124275 4745 scope.go:117] "RemoveContainer" containerID="fcd6081fd15476fc134e5598fbb942af35b677c6976b8f8ef1472a902311239e" Jan 27 12:29:32 crc kubenswrapper[4745]: I0127 12:29:32.150288 4745 scope.go:117] "RemoveContainer" containerID="3ebefa4324686ed18c5b8ac93d52d977b7f9fbb5f49126ea61667b2d39ff6c2d" Jan 27 12:29:32 crc kubenswrapper[4745]: I0127 12:29:32.166466 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t77xs"] Jan 27 12:29:32 crc kubenswrapper[4745]: I0127 12:29:32.170911 4745 scope.go:117] "RemoveContainer" containerID="07d9c97e7ef7c1bdcf1bb302db957fb9316637153f334990bef448a65378c58f" Jan 27 12:29:32 crc kubenswrapper[4745]: I0127 12:29:32.174389 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t77xs"] Jan 27 12:29:32 crc kubenswrapper[4745]: I0127 12:29:32.240068 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-nslkl" Jan 27 12:29:34 crc kubenswrapper[4745]: I0127 12:29:34.081698 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd56c13f-df8c-4107-9774-2f978d7bd61c" path="/var/lib/kubelet/pods/dd56c13f-df8c-4107-9774-2f978d7bd61c/volumes" Jan 27 12:29:35 crc kubenswrapper[4745]: I0127 12:29:35.143457 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6tfdz" event={"ID":"9a95157d-c182-4ccc-a603-e314f81ac762","Type":"ContainerStarted","Data":"79d10a702d0c1f39e05e473b39917fb095f12c703b97a5b1cff95500b729e801"} Jan 27 12:29:35 crc kubenswrapper[4745]: I0127 12:29:35.145156 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-q97lc" event={"ID":"fa5b5f13-a8ba-490e-97e4-3383c24a13c4","Type":"ContainerStarted","Data":"e51510b205c0cb32ef42f480dd846e7e97fd822546ed9efd3552586dc75efa1a"} Jan 27 12:29:35 crc kubenswrapper[4745]: I0127 12:29:35.751478 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-8qkvx"] Jan 27 12:29:35 crc kubenswrapper[4745]: E0127 
12:29:35.751716 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd56c13f-df8c-4107-9774-2f978d7bd61c" containerName="extract-utilities" Jan 27 12:29:35 crc kubenswrapper[4745]: I0127 12:29:35.751732 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd56c13f-df8c-4107-9774-2f978d7bd61c" containerName="extract-utilities" Jan 27 12:29:35 crc kubenswrapper[4745]: E0127 12:29:35.751747 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd56c13f-df8c-4107-9774-2f978d7bd61c" containerName="registry-server" Jan 27 12:29:35 crc kubenswrapper[4745]: I0127 12:29:35.751755 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd56c13f-df8c-4107-9774-2f978d7bd61c" containerName="registry-server" Jan 27 12:29:35 crc kubenswrapper[4745]: E0127 12:29:35.751768 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd56c13f-df8c-4107-9774-2f978d7bd61c" containerName="extract-content" Jan 27 12:29:35 crc kubenswrapper[4745]: I0127 12:29:35.751773 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd56c13f-df8c-4107-9774-2f978d7bd61c" containerName="extract-content" Jan 27 12:29:35 crc kubenswrapper[4745]: I0127 12:29:35.751910 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd56c13f-df8c-4107-9774-2f978d7bd61c" containerName="registry-server" Jan 27 12:29:35 crc kubenswrapper[4745]: I0127 12:29:35.752408 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8qkvx" Jan 27 12:29:35 crc kubenswrapper[4745]: I0127 12:29:35.755553 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-fggld" Jan 27 12:29:35 crc kubenswrapper[4745]: I0127 12:29:35.755708 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 27 12:29:35 crc kubenswrapper[4745]: I0127 12:29:35.766854 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 27 12:29:35 crc kubenswrapper[4745]: I0127 12:29:35.775394 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8qkvx"] Jan 27 12:29:35 crc kubenswrapper[4745]: I0127 12:29:35.910113 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pqtr\" (UniqueName: \"kubernetes.io/projected/d2cbfbcd-9554-45c8-8d20-57fe4cd44d9e-kube-api-access-4pqtr\") pod \"openstack-operator-index-8qkvx\" (UID: \"d2cbfbcd-9554-45c8-8d20-57fe4cd44d9e\") " pod="openstack-operators/openstack-operator-index-8qkvx" Jan 27 12:29:35 crc kubenswrapper[4745]: I0127 12:29:35.967563 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 12:29:35 crc kubenswrapper[4745]: I0127 12:29:35.967640 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 12:29:36 crc kubenswrapper[4745]: I0127 12:29:36.011731 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4pqtr\" (UniqueName: \"kubernetes.io/projected/d2cbfbcd-9554-45c8-8d20-57fe4cd44d9e-kube-api-access-4pqtr\") pod \"openstack-operator-index-8qkvx\" (UID: \"d2cbfbcd-9554-45c8-8d20-57fe4cd44d9e\") " pod="openstack-operators/openstack-operator-index-8qkvx" Jan 27 12:29:36 crc kubenswrapper[4745]: I0127 12:29:36.033636 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pqtr\" (UniqueName: \"kubernetes.io/projected/d2cbfbcd-9554-45c8-8d20-57fe4cd44d9e-kube-api-access-4pqtr\") pod \"openstack-operator-index-8qkvx\" (UID: \"d2cbfbcd-9554-45c8-8d20-57fe4cd44d9e\") " pod="openstack-operators/openstack-operator-index-8qkvx" Jan 27 12:29:36 crc kubenswrapper[4745]: I0127 12:29:36.066647 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8qkvx" Jan 27 12:29:36 crc kubenswrapper[4745]: I0127 12:29:36.156052 4745 generic.go:334] "Generic (PLEG): container finished" podID="9a95157d-c182-4ccc-a603-e314f81ac762" containerID="79d10a702d0c1f39e05e473b39917fb095f12c703b97a5b1cff95500b729e801" exitCode=0 Jan 27 12:29:36 crc kubenswrapper[4745]: I0127 12:29:36.156150 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6tfdz" event={"ID":"9a95157d-c182-4ccc-a603-e314f81ac762","Type":"ContainerDied","Data":"79d10a702d0c1f39e05e473b39917fb095f12c703b97a5b1cff95500b729e801"} Jan 27 12:29:36 crc kubenswrapper[4745]: I0127 12:29:36.156318 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-q97lc" Jan 27 12:29:36 crc kubenswrapper[4745]: I0127 12:29:36.227382 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-q97lc" podStartSLOduration=2.992970947 podStartE2EDuration="16.227365655s" podCreationTimestamp="2026-01-27 12:29:20 +0000 UTC" firstStartedPulling="2026-01-27 12:29:21.717499154 +0000 UTC m=+1054.522409832" lastFinishedPulling="2026-01-27 12:29:34.951893852 +0000 UTC m=+1067.756804540" observedRunningTime="2026-01-27 12:29:36.191690305 +0000 UTC m=+1068.996600993" watchObservedRunningTime="2026-01-27 12:29:36.227365655 +0000 UTC m=+1069.032276343" Jan 27 12:29:36 crc kubenswrapper[4745]: I0127 12:29:36.361212 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8qkvx"] Jan 27 12:29:37 crc kubenswrapper[4745]: I0127 12:29:37.164518 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8qkvx" event={"ID":"d2cbfbcd-9554-45c8-8d20-57fe4cd44d9e","Type":"ContainerStarted","Data":"fb89b5c5452eac9d7dc79e89d5b50d1558a837e4aad7c0aa54b2aadb26c10657"} Jan 27 12:29:40 crc kubenswrapper[4745]: I0127 12:29:40.343516 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-8qkvx"] Jan 27 12:29:40 crc kubenswrapper[4745]: I0127 12:29:40.782507 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-djdl6" Jan 27 12:29:40 crc kubenswrapper[4745]: I0127 12:29:40.948140 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-g74wc"] Jan 27 12:29:40 crc kubenswrapper[4745]: I0127 12:29:40.948930 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-g74wc" Jan 27 12:29:40 crc kubenswrapper[4745]: I0127 12:29:40.960365 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-g74wc"] Jan 27 12:29:41 crc kubenswrapper[4745]: I0127 12:29:41.086624 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slmcd\" (UniqueName: \"kubernetes.io/projected/f9c63e1a-3bc5-4367-8fba-b4c574ba5592-kube-api-access-slmcd\") pod \"openstack-operator-index-g74wc\" (UID: \"f9c63e1a-3bc5-4367-8fba-b4c574ba5592\") " pod="openstack-operators/openstack-operator-index-g74wc" Jan 27 12:29:41 crc kubenswrapper[4745]: I0127 12:29:41.188500 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slmcd\" (UniqueName: \"kubernetes.io/projected/f9c63e1a-3bc5-4367-8fba-b4c574ba5592-kube-api-access-slmcd\") pod \"openstack-operator-index-g74wc\" (UID: \"f9c63e1a-3bc5-4367-8fba-b4c574ba5592\") " pod="openstack-operators/openstack-operator-index-g74wc" Jan 27 12:29:41 crc kubenswrapper[4745]: I0127 12:29:41.192554 4745 generic.go:334] "Generic (PLEG): container finished" podID="9a95157d-c182-4ccc-a603-e314f81ac762" containerID="10dcea5e14b78fbce7afcfecbd2bd991b4b1b3fe66c5b066c198d51cd32d2f02" exitCode=0 Jan 27 12:29:41 crc kubenswrapper[4745]: I0127 12:29:41.192611 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6tfdz" event={"ID":"9a95157d-c182-4ccc-a603-e314f81ac762","Type":"ContainerDied","Data":"10dcea5e14b78fbce7afcfecbd2bd991b4b1b3fe66c5b066c198d51cd32d2f02"} Jan 27 12:29:41 crc kubenswrapper[4745]: I0127 12:29:41.215690 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slmcd\" (UniqueName: \"kubernetes.io/projected/f9c63e1a-3bc5-4367-8fba-b4c574ba5592-kube-api-access-slmcd\") pod \"openstack-operator-index-g74wc\" (UID: \"f9c63e1a-3bc5-4367-8fba-b4c574ba5592\") " pod="openstack-operators/openstack-operator-index-g74wc" Jan 27 12:29:41 crc kubenswrapper[4745]: I0127 12:29:41.268885 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-g74wc" Jan 27 12:29:41 crc kubenswrapper[4745]: I0127 12:29:41.916473 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-g74wc"] Jan 27 12:29:41 crc kubenswrapper[4745]: W0127 12:29:41.923532 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9c63e1a_3bc5_4367_8fba_b4c574ba5592.slice/crio-a061cb5be2988a8e4d61c4019b26c7262be190f83632534474c1e223fe2d8ace WatchSource:0}: Error finding container a061cb5be2988a8e4d61c4019b26c7262be190f83632534474c1e223fe2d8ace: Status 404 returned error can't find the container with id a061cb5be2988a8e4d61c4019b26c7262be190f83632534474c1e223fe2d8ace Jan 27 12:29:42 crc kubenswrapper[4745]: I0127 12:29:42.201675 4745 generic.go:334] "Generic (PLEG): container finished" podID="9a95157d-c182-4ccc-a603-e314f81ac762" containerID="38fbf03f3242f6257a8340b96937e77a03128b8bb276acf2f024ae6eff33358a" exitCode=0 Jan 27 12:29:42 crc kubenswrapper[4745]: I0127 12:29:42.201956 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6tfdz" event={"ID":"9a95157d-c182-4ccc-a603-e314f81ac762","Type":"ContainerDied","Data":"38fbf03f3242f6257a8340b96937e77a03128b8bb276acf2f024ae6eff33358a"} Jan 27 12:29:42 crc kubenswrapper[4745]: I0127 12:29:42.203004 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-g74wc" event={"ID":"f9c63e1a-3bc5-4367-8fba-b4c574ba5592","Type":"ContainerStarted","Data":"a061cb5be2988a8e4d61c4019b26c7262be190f83632534474c1e223fe2d8ace"} Jan 27 12:29:43 crc kubenswrapper[4745]: I0127 12:29:43.213430 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6tfdz" event={"ID":"9a95157d-c182-4ccc-a603-e314f81ac762","Type":"ContainerStarted","Data":"99a20b958a4c0af5c57db4439173baa6b3d14bab25694d59a93115a43bd2701c"} Jan 27 12:29:43 crc kubenswrapper[4745]: I0127 12:29:43.213756 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6tfdz" event={"ID":"9a95157d-c182-4ccc-a603-e314f81ac762","Type":"ContainerStarted","Data":"020673f3b029af66fd4c76c3f1da74ed59fd8853f1cf659c8e4f8510aba68a47"} Jan 27 12:29:43 crc kubenswrapper[4745]: I0127 12:29:43.213769 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6tfdz" event={"ID":"9a95157d-c182-4ccc-a603-e314f81ac762","Type":"ContainerStarted","Data":"b3933cfbe2bbbbed39b6c0aa8ba8e3c7db4817d37def80b22cc36c398ac96bc6"} Jan 27 12:29:43 crc kubenswrapper[4745]: I0127 12:29:43.213778 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6tfdz" event={"ID":"9a95157d-c182-4ccc-a603-e314f81ac762","Type":"ContainerStarted","Data":"644b7cf52e21616be86b649e2ed61b219efaa9e447ae656a210745626745a66d"} Jan 27 12:29:44 crc kubenswrapper[4745]: I0127 12:29:44.222960 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6tfdz" event={"ID":"9a95157d-c182-4ccc-a603-e314f81ac762","Type":"ContainerStarted","Data":"01f76ff45c4814cbc4668a94c8515ce3bf8bc8a1bb91730b99d9b434012bb00a"} Jan 27 12:29:44 crc kubenswrapper[4745]: I0127 12:29:44.223229 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6tfdz" event={"ID":"9a95157d-c182-4ccc-a603-e314f81ac762","Type":"ContainerStarted","Data":"9c0753938f0b02ce142c25f0f17abcc284660021692ac199fea6d63317cded2f"} Jan 27 12:29:44 crc kubenswrapper[4745]: 
I0127 12:29:44.224191 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-6tfdz" Jan 27 12:29:44 crc kubenswrapper[4745]: I0127 12:29:44.273718 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-6tfdz" podStartSLOduration=10.220096233 podStartE2EDuration="24.273703029s" podCreationTimestamp="2026-01-27 12:29:20 +0000 UTC" firstStartedPulling="2026-01-27 12:29:20.87311093 +0000 UTC m=+1053.678021618" lastFinishedPulling="2026-01-27 12:29:34.926717726 +0000 UTC m=+1067.731628414" observedRunningTime="2026-01-27 12:29:44.272078742 +0000 UTC m=+1077.076989430" watchObservedRunningTime="2026-01-27 12:29:44.273703029 +0000 UTC m=+1077.078613717" Jan 27 12:29:45 crc kubenswrapper[4745]: I0127 12:29:45.643209 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-6tfdz" Jan 27 12:29:45 crc kubenswrapper[4745]: I0127 12:29:45.701836 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-6tfdz" Jan 27 12:29:50 crc kubenswrapper[4745]: I0127 12:29:50.278336 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-g74wc" event={"ID":"f9c63e1a-3bc5-4367-8fba-b4c574ba5592","Type":"ContainerStarted","Data":"973c7235a953a56d5ff945249582eaae2b6cbbd44c6c77c8d13634e9d9a21fac"} Jan 27 12:29:50 crc kubenswrapper[4745]: I0127 12:29:50.279910 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8qkvx" event={"ID":"d2cbfbcd-9554-45c8-8d20-57fe4cd44d9e","Type":"ContainerStarted","Data":"01e960d33a4651cb8f66bfece5e4b25da34361bc3bcf812a72cdd1f448b01853"} Jan 27 12:29:50 crc kubenswrapper[4745]: I0127 12:29:50.279983 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-8qkvx" podUID="d2cbfbcd-9554-45c8-8d20-57fe4cd44d9e" containerName="registry-server" containerID="cri-o://01e960d33a4651cb8f66bfece5e4b25da34361bc3bcf812a72cdd1f448b01853" gracePeriod=2 Jan 27 12:29:50 crc kubenswrapper[4745]: I0127 12:29:50.298236 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-g74wc" podStartSLOduration=3.020370756 podStartE2EDuration="10.298198678s" podCreationTimestamp="2026-01-27 12:29:40 +0000 UTC" firstStartedPulling="2026-01-27 12:29:41.924658431 +0000 UTC m=+1074.729569129" lastFinishedPulling="2026-01-27 12:29:49.202486363 +0000 UTC m=+1082.007397051" observedRunningTime="2026-01-27 12:29:50.294666826 +0000 UTC m=+1083.099577544" watchObservedRunningTime="2026-01-27 12:29:50.298198678 +0000 UTC m=+1083.103109366" Jan 27 12:29:50 crc kubenswrapper[4745]: I0127 12:29:50.324136 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-8qkvx" podStartSLOduration=2.520764013 podStartE2EDuration="15.324115106s" podCreationTimestamp="2026-01-27 12:29:35 +0000 UTC" firstStartedPulling="2026-01-27 12:29:36.419992842 +0000 UTC m=+1069.224903530" lastFinishedPulling="2026-01-27 12:29:49.223343935 +0000 UTC m=+1082.028254623" observedRunningTime="2026-01-27 12:29:50.311245625 +0000 UTC m=+1083.116156313" watchObservedRunningTime="2026-01-27 12:29:50.324115106 +0000 UTC m=+1083.129025794" Jan 27 12:29:50 crc kubenswrapper[4745]: I0127 12:29:50.722321 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-8qkvx" Jan 27 12:29:50 crc kubenswrapper[4745]: I0127 12:29:50.845582 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pqtr\" (UniqueName: \"kubernetes.io/projected/d2cbfbcd-9554-45c8-8d20-57fe4cd44d9e-kube-api-access-4pqtr\") pod \"d2cbfbcd-9554-45c8-8d20-57fe4cd44d9e\" (UID: \"d2cbfbcd-9554-45c8-8d20-57fe4cd44d9e\") " Jan 27 12:29:50 crc kubenswrapper[4745]: I0127 12:29:50.850702 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2cbfbcd-9554-45c8-8d20-57fe4cd44d9e-kube-api-access-4pqtr" (OuterVolumeSpecName: "kube-api-access-4pqtr") pod "d2cbfbcd-9554-45c8-8d20-57fe4cd44d9e" (UID: "d2cbfbcd-9554-45c8-8d20-57fe4cd44d9e"). InnerVolumeSpecName "kube-api-access-4pqtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:29:50 crc kubenswrapper[4745]: I0127 12:29:50.947634 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pqtr\" (UniqueName: \"kubernetes.io/projected/d2cbfbcd-9554-45c8-8d20-57fe4cd44d9e-kube-api-access-4pqtr\") on node \"crc\" DevicePath \"\"" Jan 27 12:29:51 crc kubenswrapper[4745]: I0127 12:29:51.254315 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-q97lc" Jan 27 12:29:51 crc kubenswrapper[4745]: I0127 12:29:51.269377 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-g74wc" Jan 27 12:29:51 crc kubenswrapper[4745]: I0127 12:29:51.269421 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-g74wc" Jan 27 12:29:51 crc kubenswrapper[4745]: I0127 12:29:51.290994 4745 generic.go:334] "Generic (PLEG): container finished" podID="d2cbfbcd-9554-45c8-8d20-57fe4cd44d9e" containerID="01e960d33a4651cb8f66bfece5e4b25da34361bc3bcf812a72cdd1f448b01853" exitCode=0 Jan 27 12:29:51 crc kubenswrapper[4745]: I0127 12:29:51.291223 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8qkvx" event={"ID":"d2cbfbcd-9554-45c8-8d20-57fe4cd44d9e","Type":"ContainerDied","Data":"01e960d33a4651cb8f66bfece5e4b25da34361bc3bcf812a72cdd1f448b01853"} Jan 27 12:29:51 crc kubenswrapper[4745]: I0127 12:29:51.291862 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8qkvx" event={"ID":"d2cbfbcd-9554-45c8-8d20-57fe4cd44d9e","Type":"ContainerDied","Data":"fb89b5c5452eac9d7dc79e89d5b50d1558a837e4aad7c0aa54b2aadb26c10657"} Jan 27 12:29:51 crc kubenswrapper[4745]: I0127 12:29:51.291937 4745 scope.go:117] "RemoveContainer" containerID="01e960d33a4651cb8f66bfece5e4b25da34361bc3bcf812a72cdd1f448b01853" Jan 27 12:29:51 crc kubenswrapper[4745]: I0127 12:29:51.291307 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-8qkvx" Jan 27 12:29:51 crc kubenswrapper[4745]: I0127 12:29:51.316468 4745 scope.go:117] "RemoveContainer" containerID="01e960d33a4651cb8f66bfece5e4b25da34361bc3bcf812a72cdd1f448b01853" Jan 27 12:29:51 crc kubenswrapper[4745]: I0127 12:29:51.316832 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-g74wc" Jan 27 12:29:51 crc kubenswrapper[4745]: E0127 12:29:51.317296 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01e960d33a4651cb8f66bfece5e4b25da34361bc3bcf812a72cdd1f448b01853\": container with ID starting with 01e960d33a4651cb8f66bfece5e4b25da34361bc3bcf812a72cdd1f448b01853 not found: ID does not exist" containerID="01e960d33a4651cb8f66bfece5e4b25da34361bc3bcf812a72cdd1f448b01853" Jan 27 12:29:51 crc kubenswrapper[4745]: I0127 12:29:51.317370 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01e960d33a4651cb8f66bfece5e4b25da34361bc3bcf812a72cdd1f448b01853"} err="failed to get container status \"01e960d33a4651cb8f66bfece5e4b25da34361bc3bcf812a72cdd1f448b01853\": rpc error: code = NotFound desc = could not find container \"01e960d33a4651cb8f66bfece5e4b25da34361bc3bcf812a72cdd1f448b01853\": container with ID starting with 01e960d33a4651cb8f66bfece5e4b25da34361bc3bcf812a72cdd1f448b01853 not found: ID does not exist" Jan 27 12:29:51 crc kubenswrapper[4745]: I0127 12:29:51.345585 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-8qkvx"] Jan 27 12:29:51 crc kubenswrapper[4745]: I0127 12:29:51.350057 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-8qkvx"] Jan 27 12:29:52 crc kubenswrapper[4745]: I0127 12:29:52.084361 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2cbfbcd-9554-45c8-8d20-57fe4cd44d9e" path="/var/lib/kubelet/pods/d2cbfbcd-9554-45c8-8d20-57fe4cd44d9e/volumes" Jan 27 12:30:00 crc kubenswrapper[4745]: I0127 12:30:00.142489 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491950-2k8q9"] Jan 27 12:30:00 crc kubenswrapper[4745]: E0127 12:30:00.142974 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2cbfbcd-9554-45c8-8d20-57fe4cd44d9e" containerName="registry-server" Jan 27 12:30:00 crc kubenswrapper[4745]: I0127 12:30:00.142988 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2cbfbcd-9554-45c8-8d20-57fe4cd44d9e" containerName="registry-server" Jan 27 12:30:00 crc kubenswrapper[4745]: I0127 12:30:00.143114 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2cbfbcd-9554-45c8-8d20-57fe4cd44d9e" containerName="registry-server" Jan 27 12:30:00 crc kubenswrapper[4745]: I0127 12:30:00.143581 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491950-2k8q9" Jan 27 12:30:00 crc kubenswrapper[4745]: I0127 12:30:00.146738 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 12:30:00 crc kubenswrapper[4745]: I0127 12:30:00.147142 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 12:30:00 crc kubenswrapper[4745]: I0127 12:30:00.153942 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491950-2k8q9"] Jan 27 12:30:00 crc kubenswrapper[4745]: I0127 12:30:00.300619 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/362ff88a-01a5-46fe-8c12-15247d5b2028-secret-volume\") pod \"collect-profiles-29491950-2k8q9\" (UID: \"362ff88a-01a5-46fe-8c12-15247d5b2028\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491950-2k8q9" Jan 27 12:30:00 crc kubenswrapper[4745]: I0127 12:30:00.300675 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/362ff88a-01a5-46fe-8c12-15247d5b2028-config-volume\") pod \"collect-profiles-29491950-2k8q9\" (UID: \"362ff88a-01a5-46fe-8c12-15247d5b2028\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491950-2k8q9" Jan 27 12:30:00 crc kubenswrapper[4745]: I0127 12:30:00.300874 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmm9p\" (UniqueName: \"kubernetes.io/projected/362ff88a-01a5-46fe-8c12-15247d5b2028-kube-api-access-hmm9p\") pod \"collect-profiles-29491950-2k8q9\" (UID: \"362ff88a-01a5-46fe-8c12-15247d5b2028\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491950-2k8q9" Jan 27 12:30:00 crc kubenswrapper[4745]: I0127 12:30:00.402977 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmm9p\" (UniqueName: \"kubernetes.io/projected/362ff88a-01a5-46fe-8c12-15247d5b2028-kube-api-access-hmm9p\") pod \"collect-profiles-29491950-2k8q9\" (UID: \"362ff88a-01a5-46fe-8c12-15247d5b2028\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491950-2k8q9" Jan 27 12:30:00 crc kubenswrapper[4745]: I0127 12:30:00.403087 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/362ff88a-01a5-46fe-8c12-15247d5b2028-secret-volume\") pod \"collect-profiles-29491950-2k8q9\" (UID: \"362ff88a-01a5-46fe-8c12-15247d5b2028\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491950-2k8q9" Jan 27 12:30:00 crc kubenswrapper[4745]: I0127 12:30:00.403123 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/362ff88a-01a5-46fe-8c12-15247d5b2028-config-volume\") pod \"collect-profiles-29491950-2k8q9\" (UID: \"362ff88a-01a5-46fe-8c12-15247d5b2028\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491950-2k8q9" Jan 27 12:30:00 crc kubenswrapper[4745]: I0127 12:30:00.404030 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/362ff88a-01a5-46fe-8c12-15247d5b2028-config-volume\") pod 
\"collect-profiles-29491950-2k8q9\" (UID: \"362ff88a-01a5-46fe-8c12-15247d5b2028\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491950-2k8q9" Jan 27 12:30:00 crc kubenswrapper[4745]: I0127 12:30:00.410851 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/362ff88a-01a5-46fe-8c12-15247d5b2028-secret-volume\") pod \"collect-profiles-29491950-2k8q9\" (UID: \"362ff88a-01a5-46fe-8c12-15247d5b2028\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491950-2k8q9" Jan 27 12:30:00 crc kubenswrapper[4745]: I0127 12:30:00.421392 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmm9p\" (UniqueName: \"kubernetes.io/projected/362ff88a-01a5-46fe-8c12-15247d5b2028-kube-api-access-hmm9p\") pod \"collect-profiles-29491950-2k8q9\" (UID: \"362ff88a-01a5-46fe-8c12-15247d5b2028\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491950-2k8q9" Jan 27 12:30:00 crc kubenswrapper[4745]: I0127 12:30:00.462850 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491950-2k8q9" Jan 27 12:30:00 crc kubenswrapper[4745]: I0127 12:30:00.645106 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-6tfdz" Jan 27 12:30:00 crc kubenswrapper[4745]: I0127 12:30:00.882295 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491950-2k8q9"] Jan 27 12:30:01 crc kubenswrapper[4745]: I0127 12:30:01.299895 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-g74wc" Jan 27 12:30:01 crc kubenswrapper[4745]: I0127 12:30:01.369969 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491950-2k8q9" event={"ID":"362ff88a-01a5-46fe-8c12-15247d5b2028","Type":"ContainerStarted","Data":"6a6f837ac4fd59ecfbd3b991ba7facf250c02384def44bceb27c3f3d7d764dbe"} Jan 27 12:30:02 crc kubenswrapper[4745]: I0127 12:30:02.377072 4745 generic.go:334] "Generic (PLEG): container finished" podID="362ff88a-01a5-46fe-8c12-15247d5b2028" containerID="d43360d1d9c77a0c77ff613bcb9789449819420594f5ea8c58431d7b9ab0fa12" exitCode=0 Jan 27 12:30:02 crc kubenswrapper[4745]: I0127 12:30:02.377341 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491950-2k8q9" event={"ID":"362ff88a-01a5-46fe-8c12-15247d5b2028","Type":"ContainerDied","Data":"d43360d1d9c77a0c77ff613bcb9789449819420594f5ea8c58431d7b9ab0fa12"} Jan 27 12:30:03 crc kubenswrapper[4745]: I0127 12:30:03.715155 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491950-2k8q9" Jan 27 12:30:03 crc kubenswrapper[4745]: I0127 12:30:03.880768 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmm9p\" (UniqueName: \"kubernetes.io/projected/362ff88a-01a5-46fe-8c12-15247d5b2028-kube-api-access-hmm9p\") pod \"362ff88a-01a5-46fe-8c12-15247d5b2028\" (UID: \"362ff88a-01a5-46fe-8c12-15247d5b2028\") " Jan 27 12:30:03 crc kubenswrapper[4745]: I0127 12:30:03.880988 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/362ff88a-01a5-46fe-8c12-15247d5b2028-secret-volume\") pod \"362ff88a-01a5-46fe-8c12-15247d5b2028\" (UID: \"362ff88a-01a5-46fe-8c12-15247d5b2028\") " Jan 27 12:30:03 crc kubenswrapper[4745]: I0127 12:30:03.881045 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/362ff88a-01a5-46fe-8c12-15247d5b2028-config-volume\") pod \"362ff88a-01a5-46fe-8c12-15247d5b2028\" (UID: \"362ff88a-01a5-46fe-8c12-15247d5b2028\") " Jan 27 12:30:03 crc kubenswrapper[4745]: I0127 12:30:03.882026 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/362ff88a-01a5-46fe-8c12-15247d5b2028-config-volume" (OuterVolumeSpecName: "config-volume") pod "362ff88a-01a5-46fe-8c12-15247d5b2028" (UID: "362ff88a-01a5-46fe-8c12-15247d5b2028"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:30:03 crc kubenswrapper[4745]: I0127 12:30:03.885486 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/362ff88a-01a5-46fe-8c12-15247d5b2028-kube-api-access-hmm9p" (OuterVolumeSpecName: "kube-api-access-hmm9p") pod "362ff88a-01a5-46fe-8c12-15247d5b2028" (UID: "362ff88a-01a5-46fe-8c12-15247d5b2028"). InnerVolumeSpecName "kube-api-access-hmm9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:30:03 crc kubenswrapper[4745]: I0127 12:30:03.885776 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/362ff88a-01a5-46fe-8c12-15247d5b2028-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "362ff88a-01a5-46fe-8c12-15247d5b2028" (UID: "362ff88a-01a5-46fe-8c12-15247d5b2028"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:30:03 crc kubenswrapper[4745]: I0127 12:30:03.983069 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmm9p\" (UniqueName: \"kubernetes.io/projected/362ff88a-01a5-46fe-8c12-15247d5b2028-kube-api-access-hmm9p\") on node \"crc\" DevicePath \"\"" Jan 27 12:30:03 crc kubenswrapper[4745]: I0127 12:30:03.983119 4745 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/362ff88a-01a5-46fe-8c12-15247d5b2028-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 12:30:03 crc kubenswrapper[4745]: I0127 12:30:03.983132 4745 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/362ff88a-01a5-46fe-8c12-15247d5b2028-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 12:30:04 crc kubenswrapper[4745]: I0127 12:30:04.391098 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491950-2k8q9" event={"ID":"362ff88a-01a5-46fe-8c12-15247d5b2028","Type":"ContainerDied","Data":"6a6f837ac4fd59ecfbd3b991ba7facf250c02384def44bceb27c3f3d7d764dbe"} Jan 27 12:30:04 crc kubenswrapper[4745]: I0127 12:30:04.391137 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a6f837ac4fd59ecfbd3b991ba7facf250c02384def44bceb27c3f3d7d764dbe" Jan 27 12:30:04 crc kubenswrapper[4745]: I0127 12:30:04.391140 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491950-2k8q9" Jan 27 12:30:05 crc kubenswrapper[4745]: I0127 12:30:05.967850 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 12:30:05 crc kubenswrapper[4745]: I0127 12:30:05.967919 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 12:30:05 crc kubenswrapper[4745]: I0127 12:30:05.967975 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" Jan 27 12:30:05 crc kubenswrapper[4745]: I0127 12:30:05.968711 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4bf427099ba09136d50759e57a90c739bd38ee9b8bfd72165c113357c32a5692"} pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 12:30:05 crc kubenswrapper[4745]: I0127 12:30:05.968828 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" containerID="cri-o://4bf427099ba09136d50759e57a90c739bd38ee9b8bfd72165c113357c32a5692" gracePeriod=600 Jan 27 12:30:06 crc kubenswrapper[4745]: I0127 12:30:06.406587 4745 generic.go:334] "Generic (PLEG): container finished" 
podID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerID="4bf427099ba09136d50759e57a90c739bd38ee9b8bfd72165c113357c32a5692" exitCode=0 Jan 27 12:30:06 crc kubenswrapper[4745]: I0127 12:30:06.406678 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerDied","Data":"4bf427099ba09136d50759e57a90c739bd38ee9b8bfd72165c113357c32a5692"} Jan 27 12:30:06 crc kubenswrapper[4745]: I0127 12:30:06.406751 4745 scope.go:117] "RemoveContainer" containerID="0a05fb56de3f4f4964f4b329a07b5860a6f3e32e5425eaf7e81fdbe26e1e74c6" Jan 27 12:30:07 crc kubenswrapper[4745]: I0127 12:30:07.416529 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerStarted","Data":"b73037367855afd08946b74fe618bef765bb998591e2496436c39bc0f24265e8"} Jan 27 12:30:11 crc kubenswrapper[4745]: I0127 12:30:11.986196 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt"] Jan 27 12:30:11 crc kubenswrapper[4745]: E0127 12:30:11.992191 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="362ff88a-01a5-46fe-8c12-15247d5b2028" containerName="collect-profiles" Jan 27 12:30:11 crc kubenswrapper[4745]: I0127 12:30:11.992227 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="362ff88a-01a5-46fe-8c12-15247d5b2028" containerName="collect-profiles" Jan 27 12:30:11 crc kubenswrapper[4745]: I0127 12:30:11.992388 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="362ff88a-01a5-46fe-8c12-15247d5b2028" containerName="collect-profiles" Jan 27 12:30:11 crc kubenswrapper[4745]: I0127 12:30:11.993342 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt" Jan 27 12:30:11 crc kubenswrapper[4745]: I0127 12:30:11.993751 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt"] Jan 27 12:30:11 crc kubenswrapper[4745]: I0127 12:30:11.995590 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-lsgpg" Jan 27 12:30:12 crc kubenswrapper[4745]: I0127 12:30:12.087251 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f6975204-2c25-460d-945c-61061b38a981-util\") pod \"5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt\" (UID: \"f6975204-2c25-460d-945c-61061b38a981\") " pod="openstack-operators/5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt" Jan 27 12:30:12 crc kubenswrapper[4745]: I0127 12:30:12.087543 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmrdf\" (UniqueName: \"kubernetes.io/projected/f6975204-2c25-460d-945c-61061b38a981-kube-api-access-gmrdf\") pod \"5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt\" (UID: \"f6975204-2c25-460d-945c-61061b38a981\") " pod="openstack-operators/5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt" Jan 27 12:30:12 crc kubenswrapper[4745]: I0127 12:30:12.087697 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f6975204-2c25-460d-945c-61061b38a981-bundle\") pod \"5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt\" (UID: \"f6975204-2c25-460d-945c-61061b38a981\") " pod="openstack-operators/5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt" Jan 27 12:30:12 crc kubenswrapper[4745]: I0127 12:30:12.189625 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f6975204-2c25-460d-945c-61061b38a981-util\") pod \"5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt\" (UID: \"f6975204-2c25-460d-945c-61061b38a981\") " pod="openstack-operators/5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt" Jan 27 12:30:12 crc kubenswrapper[4745]: I0127 12:30:12.189704 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmrdf\" (UniqueName: \"kubernetes.io/projected/f6975204-2c25-460d-945c-61061b38a981-kube-api-access-gmrdf\") pod \"5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt\" (UID: \"f6975204-2c25-460d-945c-61061b38a981\") " pod="openstack-operators/5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt" Jan 27 12:30:12 crc kubenswrapper[4745]: I0127 12:30:12.189754 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f6975204-2c25-460d-945c-61061b38a981-bundle\") pod \"5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt\" (UID: \"f6975204-2c25-460d-945c-61061b38a981\") " pod="openstack-operators/5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt" Jan 27 12:30:12 crc kubenswrapper[4745]: I0127 12:30:12.190385 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/f6975204-2c25-460d-945c-61061b38a981-bundle\") pod \"5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt\" (UID: \"f6975204-2c25-460d-945c-61061b38a981\") " pod="openstack-operators/5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt" Jan 27 12:30:12 crc kubenswrapper[4745]: I0127 12:30:12.190610 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f6975204-2c25-460d-945c-61061b38a981-util\") pod \"5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt\" (UID: \"f6975204-2c25-460d-945c-61061b38a981\") " pod="openstack-operators/5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt" Jan 27 12:30:12 crc kubenswrapper[4745]: I0127 12:30:12.210996 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmrdf\" (UniqueName: \"kubernetes.io/projected/f6975204-2c25-460d-945c-61061b38a981-kube-api-access-gmrdf\") pod \"5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt\" (UID: \"f6975204-2c25-460d-945c-61061b38a981\") " pod="openstack-operators/5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt" Jan 27 12:30:12 crc kubenswrapper[4745]: I0127 12:30:12.316827 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt" Jan 27 12:30:12 crc kubenswrapper[4745]: I0127 12:30:12.877867 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt"] Jan 27 12:30:13 crc kubenswrapper[4745]: I0127 12:30:13.465111 4745 generic.go:334] "Generic (PLEG): container finished" podID="f6975204-2c25-460d-945c-61061b38a981" containerID="7564a0e451e649c65f61e9734a17005c7263e5b3e513d9b1e007baa898ef2e07" exitCode=0 Jan 27 12:30:13 crc kubenswrapper[4745]: I0127 12:30:13.465171 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt" event={"ID":"f6975204-2c25-460d-945c-61061b38a981","Type":"ContainerDied","Data":"7564a0e451e649c65f61e9734a17005c7263e5b3e513d9b1e007baa898ef2e07"} Jan 27 12:30:13 crc kubenswrapper[4745]: I0127 12:30:13.465414 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt" event={"ID":"f6975204-2c25-460d-945c-61061b38a981","Type":"ContainerStarted","Data":"486f69e418916d543d1272fd98fb25b90089dc7d8f175c2e56a7aaca0505f035"} Jan 27 12:30:13 crc kubenswrapper[4745]: I0127 12:30:13.467003 4745 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 12:30:14 crc kubenswrapper[4745]: I0127 12:30:14.473666 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt" event={"ID":"f6975204-2c25-460d-945c-61061b38a981","Type":"ContainerStarted","Data":"d93506bac53ba3949ceed5f3b56a0b349d2062b21e3789cfdf14f7fde2953782"} Jan 27 12:30:15 crc kubenswrapper[4745]: I0127 12:30:15.482609 4745 generic.go:334] "Generic (PLEG): container finished" podID="f6975204-2c25-460d-945c-61061b38a981" containerID="d93506bac53ba3949ceed5f3b56a0b349d2062b21e3789cfdf14f7fde2953782" exitCode=0 Jan 27 12:30:15 crc kubenswrapper[4745]: I0127 12:30:15.482648 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt" event={"ID":"f6975204-2c25-460d-945c-61061b38a981","Type":"ContainerDied","Data":"d93506bac53ba3949ceed5f3b56a0b349d2062b21e3789cfdf14f7fde2953782"} Jan 27 12:30:16 crc kubenswrapper[4745]: I0127 12:30:16.491058 4745 generic.go:334] "Generic (PLEG): container finished" podID="f6975204-2c25-460d-945c-61061b38a981" containerID="0914f6690491fc6ce5e8e4ada5f2f61d729f85f9570f3becb11cd3640badb455" exitCode=0 Jan 27 12:30:16 crc kubenswrapper[4745]: I0127 12:30:16.491202 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt" event={"ID":"f6975204-2c25-460d-945c-61061b38a981","Type":"ContainerDied","Data":"0914f6690491fc6ce5e8e4ada5f2f61d729f85f9570f3becb11cd3640badb455"} Jan 27 12:30:17 crc kubenswrapper[4745]: I0127 12:30:17.730709 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt" Jan 27 12:30:17 crc kubenswrapper[4745]: I0127 12:30:17.862104 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmrdf\" (UniqueName: \"kubernetes.io/projected/f6975204-2c25-460d-945c-61061b38a981-kube-api-access-gmrdf\") pod \"f6975204-2c25-460d-945c-61061b38a981\" (UID: \"f6975204-2c25-460d-945c-61061b38a981\") " Jan 27 12:30:17 crc kubenswrapper[4745]: I0127 12:30:17.862184 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f6975204-2c25-460d-945c-61061b38a981-bundle\") pod \"f6975204-2c25-460d-945c-61061b38a981\" (UID: \"f6975204-2c25-460d-945c-61061b38a981\") " Jan 27 12:30:17 crc kubenswrapper[4745]: I0127 12:30:17.862315 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f6975204-2c25-460d-945c-61061b38a981-util\") pod \"f6975204-2c25-460d-945c-61061b38a981\" (UID: \"f6975204-2c25-460d-945c-61061b38a981\") " Jan 27 12:30:17 crc kubenswrapper[4745]: I0127 12:30:17.863055 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6975204-2c25-460d-945c-61061b38a981-bundle" (OuterVolumeSpecName: "bundle") pod "f6975204-2c25-460d-945c-61061b38a981" (UID: "f6975204-2c25-460d-945c-61061b38a981"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:30:17 crc kubenswrapper[4745]: I0127 12:30:17.867513 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6975204-2c25-460d-945c-61061b38a981-kube-api-access-gmrdf" (OuterVolumeSpecName: "kube-api-access-gmrdf") pod "f6975204-2c25-460d-945c-61061b38a981" (UID: "f6975204-2c25-460d-945c-61061b38a981"). InnerVolumeSpecName "kube-api-access-gmrdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:30:17 crc kubenswrapper[4745]: I0127 12:30:17.877517 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6975204-2c25-460d-945c-61061b38a981-util" (OuterVolumeSpecName: "util") pod "f6975204-2c25-460d-945c-61061b38a981" (UID: "f6975204-2c25-460d-945c-61061b38a981"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:30:17 crc kubenswrapper[4745]: I0127 12:30:17.964174 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmrdf\" (UniqueName: \"kubernetes.io/projected/f6975204-2c25-460d-945c-61061b38a981-kube-api-access-gmrdf\") on node \"crc\" DevicePath \"\"" Jan 27 12:30:17 crc kubenswrapper[4745]: I0127 12:30:17.964248 4745 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f6975204-2c25-460d-945c-61061b38a981-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 12:30:17 crc kubenswrapper[4745]: I0127 12:30:17.964258 4745 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f6975204-2c25-460d-945c-61061b38a981-util\") on node \"crc\" DevicePath \"\"" Jan 27 12:30:18 crc kubenswrapper[4745]: I0127 12:30:18.505241 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt" event={"ID":"f6975204-2c25-460d-945c-61061b38a981","Type":"ContainerDied","Data":"486f69e418916d543d1272fd98fb25b90089dc7d8f175c2e56a7aaca0505f035"} Jan 27 12:30:18 crc kubenswrapper[4745]: I0127 12:30:18.505561 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="486f69e418916d543d1272fd98fb25b90089dc7d8f175c2e56a7aaca0505f035" Jan 27 12:30:18 crc kubenswrapper[4745]: I0127 12:30:18.505484 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt" Jan 27 12:30:24 crc kubenswrapper[4745]: I0127 12:30:24.338480 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7bc74c4864-pgst6"] Jan 27 12:30:24 crc kubenswrapper[4745]: E0127 12:30:24.339229 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6975204-2c25-460d-945c-61061b38a981" containerName="pull" Jan 27 12:30:24 crc kubenswrapper[4745]: I0127 12:30:24.339247 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6975204-2c25-460d-945c-61061b38a981" containerName="pull" Jan 27 12:30:24 crc kubenswrapper[4745]: E0127 12:30:24.339276 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6975204-2c25-460d-945c-61061b38a981" containerName="util" Jan 27 12:30:24 crc kubenswrapper[4745]: I0127 12:30:24.339306 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6975204-2c25-460d-945c-61061b38a981" containerName="util" Jan 27 12:30:24 crc kubenswrapper[4745]: E0127 12:30:24.339320 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6975204-2c25-460d-945c-61061b38a981" containerName="extract" Jan 27 12:30:24 crc kubenswrapper[4745]: I0127 12:30:24.339328 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6975204-2c25-460d-945c-61061b38a981" containerName="extract" Jan 27 12:30:24 crc kubenswrapper[4745]: I0127 12:30:24.339470 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6975204-2c25-460d-945c-61061b38a981" containerName="extract" Jan 27 12:30:24 crc kubenswrapper[4745]: I0127 12:30:24.340053 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7bc74c4864-pgst6" Jan 27 12:30:24 crc kubenswrapper[4745]: I0127 12:30:24.342611 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-mln4r" Jan 27 12:30:24 crc kubenswrapper[4745]: I0127 12:30:24.362969 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7bc74c4864-pgst6"] Jan 27 12:30:24 crc kubenswrapper[4745]: I0127 12:30:24.496528 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lptnq\" (UniqueName: \"kubernetes.io/projected/9a62f602-d717-4a1f-996d-57fa02fbc829-kube-api-access-lptnq\") pod \"openstack-operator-controller-init-7bc74c4864-pgst6\" (UID: \"9a62f602-d717-4a1f-996d-57fa02fbc829\") " pod="openstack-operators/openstack-operator-controller-init-7bc74c4864-pgst6" Jan 27 12:30:24 crc kubenswrapper[4745]: I0127 12:30:24.597966 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lptnq\" (UniqueName: \"kubernetes.io/projected/9a62f602-d717-4a1f-996d-57fa02fbc829-kube-api-access-lptnq\") pod \"openstack-operator-controller-init-7bc74c4864-pgst6\" (UID: \"9a62f602-d717-4a1f-996d-57fa02fbc829\") " pod="openstack-operators/openstack-operator-controller-init-7bc74c4864-pgst6" Jan 27 12:30:24 crc kubenswrapper[4745]: I0127 12:30:24.617621 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lptnq\" (UniqueName: \"kubernetes.io/projected/9a62f602-d717-4a1f-996d-57fa02fbc829-kube-api-access-lptnq\") pod \"openstack-operator-controller-init-7bc74c4864-pgst6\" (UID: \"9a62f602-d717-4a1f-996d-57fa02fbc829\") " pod="openstack-operators/openstack-operator-controller-init-7bc74c4864-pgst6" Jan 27 12:30:24 crc kubenswrapper[4745]: I0127 12:30:24.662594 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7bc74c4864-pgst6" Jan 27 12:30:24 crc kubenswrapper[4745]: I0127 12:30:24.913204 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7bc74c4864-pgst6"] Jan 27 12:30:24 crc kubenswrapper[4745]: W0127 12:30:24.919769 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a62f602_d717_4a1f_996d_57fa02fbc829.slice/crio-cc1114d246fe254f5c513aa43f6d99af6a05654019787f7ad64f5a1887d27cef WatchSource:0}: Error finding container cc1114d246fe254f5c513aa43f6d99af6a05654019787f7ad64f5a1887d27cef: Status 404 returned error can't find the container with id cc1114d246fe254f5c513aa43f6d99af6a05654019787f7ad64f5a1887d27cef Jan 27 12:30:25 crc kubenswrapper[4745]: I0127 12:30:25.579005 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7bc74c4864-pgst6" event={"ID":"9a62f602-d717-4a1f-996d-57fa02fbc829","Type":"ContainerStarted","Data":"cc1114d246fe254f5c513aa43f6d99af6a05654019787f7ad64f5a1887d27cef"} Jan 27 12:30:33 crc kubenswrapper[4745]: I0127 12:30:33.638574 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7bc74c4864-pgst6" event={"ID":"9a62f602-d717-4a1f-996d-57fa02fbc829","Type":"ContainerStarted","Data":"e8375fdc7ab7e9370915829298e805fb384513c0b17be09b47f2483dbd1d37da"} Jan 27 12:30:33 crc kubenswrapper[4745]: I0127 12:30:33.639231 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7bc74c4864-pgst6" Jan 27 12:30:33 crc kubenswrapper[4745]: I0127 12:30:33.713611 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7bc74c4864-pgst6" podStartSLOduration=1.474576613 podStartE2EDuration="9.713588159s" podCreationTimestamp="2026-01-27 12:30:24 +0000 UTC" firstStartedPulling="2026-01-27 12:30:24.921574797 +0000 UTC m=+1117.726485485" lastFinishedPulling="2026-01-27 12:30:33.160586343 +0000 UTC m=+1125.965497031" observedRunningTime="2026-01-27 12:30:33.708530363 +0000 UTC m=+1126.513441051" watchObservedRunningTime="2026-01-27 12:30:33.713588159 +0000 UTC m=+1126.518498847" Jan 27 12:30:44 crc kubenswrapper[4745]: I0127 12:30:44.665681 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7bc74c4864-pgst6" Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.707405 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-ptkxh"] Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.709491 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-ptkxh" Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.720609 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-kdbm9"] Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.721630 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-kdbm9" Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.727458 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-48cpn" Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.727767 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-rclwv" Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.738668 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-ptkxh"] Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.756986 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-kdbm9"] Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.758563 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwmwj\" (UniqueName: \"kubernetes.io/projected/7fa2cf33-1cec-4874-8e41-090f3bd0f550-kube-api-access-gwmwj\") pod \"cinder-operator-controller-manager-655bf9cfbb-kdbm9\" (UID: \"7fa2cf33-1cec-4874-8e41-090f3bd0f550\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-kdbm9" Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.758622 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw8rd\" (UniqueName: \"kubernetes.io/projected/a545817b-adaf-4966-8472-4a599db84913-kube-api-access-jw8rd\") pod \"barbican-operator-controller-manager-65ff799cfd-ptkxh\" (UID: \"a545817b-adaf-4966-8472-4a599db84913\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-ptkxh" Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.769696 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-g429m"] Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.770714 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-g429m" Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.781343 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-j7rm8" Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.796825 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-g429m"] Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.812937 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-78zrk"] Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.813988 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-78zrk" Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.817081 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-vldgn" Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.821790 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-575ffb885b-7jg6g"] Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.823056 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-7jg6g" Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.836769 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-zgwhh" Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.836878 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-78zrk"] Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.871093 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwmwj\" (UniqueName: \"kubernetes.io/projected/7fa2cf33-1cec-4874-8e41-090f3bd0f550-kube-api-access-gwmwj\") pod \"cinder-operator-controller-manager-655bf9cfbb-kdbm9\" (UID: \"7fa2cf33-1cec-4874-8e41-090f3bd0f550\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-kdbm9" Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.871158 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw8rd\" (UniqueName: \"kubernetes.io/projected/a545817b-adaf-4966-8472-4a599db84913-kube-api-access-jw8rd\") pod \"barbican-operator-controller-manager-65ff799cfd-ptkxh\" (UID: \"a545817b-adaf-4966-8472-4a599db84913\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-ptkxh" Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.871193 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-575ffb885b-7jg6g"] Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.900895 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kfdwp"] Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.902003 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kfdwp" Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.916251 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwmwj\" (UniqueName: \"kubernetes.io/projected/7fa2cf33-1cec-4874-8e41-090f3bd0f550-kube-api-access-gwmwj\") pod \"cinder-operator-controller-manager-655bf9cfbb-kdbm9\" (UID: \"7fa2cf33-1cec-4874-8e41-090f3bd0f550\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-kdbm9" Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.916625 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-287r5" Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.929085 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw8rd\" (UniqueName: \"kubernetes.io/projected/a545817b-adaf-4966-8472-4a599db84913-kube-api-access-jw8rd\") pod \"barbican-operator-controller-manager-65ff799cfd-ptkxh\" (UID: \"a545817b-adaf-4966-8472-4a599db84913\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-ptkxh" Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.939686 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-58k9b"] Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.940657 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-58k9b" Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.943105 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.957241 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-h24gp" Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.957798 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kfdwp"] Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.975037 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9n2c\" (UniqueName: \"kubernetes.io/projected/5dcdc404-8271-4f68-ab3e-b2158e959c6a-kube-api-access-p9n2c\") pod \"designate-operator-controller-manager-77554cdc5c-g429m\" (UID: \"5dcdc404-8271-4f68-ab3e-b2158e959c6a\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-g429m" Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.975080 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4pt5\" (UniqueName: \"kubernetes.io/projected/6e5cee05-93a0-415b-b0f8-12187035f0e0-kube-api-access-t4pt5\") pod \"heat-operator-controller-manager-575ffb885b-7jg6g\" (UID: \"6e5cee05-93a0-415b-b0f8-12187035f0e0\") " pod="openstack-operators/heat-operator-controller-manager-575ffb885b-7jg6g" Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.975136 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr66b\" (UniqueName: \"kubernetes.io/projected/1268d1f9-be48-4d61-8750-d941d0699718-kube-api-access-pr66b\") pod \"glance-operator-controller-manager-67dd55ff59-78zrk\" (UID: 
\"1268d1f9-be48-4d61-8750-d941d0699718\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-78zrk" Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.978883 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-9w75r"] Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.979847 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-9w75r" Jan 27 12:31:02 crc kubenswrapper[4745]: I0127 12:31:02.986675 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-txnxn" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.032936 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-58k9b"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.041847 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-9w75r"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.043608 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-ptkxh" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.062872 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-dmd65"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.076561 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-kdbm9" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.080441 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-dmd65" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.085724 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-bh8cs" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.138961 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4pt5\" (UniqueName: \"kubernetes.io/projected/6e5cee05-93a0-415b-b0f8-12187035f0e0-kube-api-access-t4pt5\") pod \"heat-operator-controller-manager-575ffb885b-7jg6g\" (UID: \"6e5cee05-93a0-415b-b0f8-12187035f0e0\") " pod="openstack-operators/heat-operator-controller-manager-575ffb885b-7jg6g" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.139025 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9n2c\" (UniqueName: \"kubernetes.io/projected/5dcdc404-8271-4f68-ab3e-b2158e959c6a-kube-api-access-p9n2c\") pod \"designate-operator-controller-manager-77554cdc5c-g429m\" (UID: \"5dcdc404-8271-4f68-ab3e-b2158e959c6a\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-g429m" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.139163 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr66b\" (UniqueName: \"kubernetes.io/projected/1268d1f9-be48-4d61-8750-d941d0699718-kube-api-access-pr66b\") pod \"glance-operator-controller-manager-67dd55ff59-78zrk\" (UID: \"1268d1f9-be48-4d61-8750-d941d0699718\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-78zrk" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.139196 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ca2fa659-fb2b-446c-833d-78a0314a8059-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-58k9b\" (UID: \"ca2fa659-fb2b-446c-833d-78a0314a8059\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-58k9b" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.139297 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbrl9\" (UniqueName: \"kubernetes.io/projected/f11a179f-d8d9-4a2b-bce5-5319a44efdb0-kube-api-access-kbrl9\") pod \"horizon-operator-controller-manager-77d5c5b54f-kfdwp\" (UID: \"f11a179f-d8d9-4a2b-bce5-5319a44efdb0\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kfdwp" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.139348 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br8m4\" (UniqueName: \"kubernetes.io/projected/ca2fa659-fb2b-446c-833d-78a0314a8059-kube-api-access-br8m4\") pod \"infra-operator-controller-manager-7d75bc88d5-58k9b\" (UID: \"ca2fa659-fb2b-446c-833d-78a0314a8059\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-58k9b" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.163216 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-dmd65"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.183715 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr66b\" (UniqueName: \"kubernetes.io/projected/1268d1f9-be48-4d61-8750-d941d0699718-kube-api-access-pr66b\") pod 
\"glance-operator-controller-manager-67dd55ff59-78zrk\" (UID: \"1268d1f9-be48-4d61-8750-d941d0699718\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-78zrk" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.184299 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-hvr5f"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.185455 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-hvr5f" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.191592 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9n2c\" (UniqueName: \"kubernetes.io/projected/5dcdc404-8271-4f68-ab3e-b2158e959c6a-kube-api-access-p9n2c\") pod \"designate-operator-controller-manager-77554cdc5c-g429m\" (UID: \"5dcdc404-8271-4f68-ab3e-b2158e959c6a\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-g429m" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.193122 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-j7rxc" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.195362 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-x6r29"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.211067 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-mr876"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.212377 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-mr876" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.213012 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-x6r29" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.217721 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-hvr5f"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.221214 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-6k28d" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.223866 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-mr876"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.226480 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-q9wpd" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.227079 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4pt5\" (UniqueName: \"kubernetes.io/projected/6e5cee05-93a0-415b-b0f8-12187035f0e0-kube-api-access-t4pt5\") pod \"heat-operator-controller-manager-575ffb885b-7jg6g\" (UID: \"6e5cee05-93a0-415b-b0f8-12187035f0e0\") " pod="openstack-operators/heat-operator-controller-manager-575ffb885b-7jg6g" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.237266 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-x6r29"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.242609 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-fbd766fb6-57d5j"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.244331 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-fbd766fb6-57d5j" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.246277 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-xr54s" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.249894 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-tzbs9"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.250875 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-tzbs9" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.253102 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-smn8c" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.255624 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-fbd766fb6-57d5j"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.261264 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbrl9\" (UniqueName: \"kubernetes.io/projected/f11a179f-d8d9-4a2b-bce5-5319a44efdb0-kube-api-access-kbrl9\") pod \"horizon-operator-controller-manager-77d5c5b54f-kfdwp\" (UID: \"f11a179f-d8d9-4a2b-bce5-5319a44efdb0\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kfdwp" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.261309 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br8m4\" (UniqueName: \"kubernetes.io/projected/ca2fa659-fb2b-446c-833d-78a0314a8059-kube-api-access-br8m4\") pod \"infra-operator-controller-manager-7d75bc88d5-58k9b\" (UID: \"ca2fa659-fb2b-446c-833d-78a0314a8059\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-58k9b" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.261355 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqd8t\" (UniqueName: \"kubernetes.io/projected/837948f6-a7b7-4895-bc90-c87cce695f25-kube-api-access-qqd8t\") pod \"manila-operator-controller-manager-849fcfbb6b-hvr5f\" (UID: \"837948f6-a7b7-4895-bc90-c87cce695f25\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-hvr5f" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.261393 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gmzk\" (UniqueName: \"kubernetes.io/projected/cc8f3584-bf19-41e4-837a-13afabf31909-kube-api-access-2gmzk\") pod \"keystone-operator-controller-manager-55f684fd56-dmd65\" (UID: \"cc8f3584-bf19-41e4-837a-13afabf31909\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-dmd65" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.261432 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvvq8\" (UniqueName: \"kubernetes.io/projected/8b457f32-c7cd-4113-b4a7-d4e06bc578d3-kube-api-access-lvvq8\") pod \"ironic-operator-controller-manager-768b776ffb-9w75r\" (UID: \"8b457f32-c7cd-4113-b4a7-d4e06bc578d3\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-9w75r" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.261474 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ca2fa659-fb2b-446c-833d-78a0314a8059-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-58k9b\" (UID: \"ca2fa659-fb2b-446c-833d-78a0314a8059\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-58k9b" Jan 27 12:31:03 crc kubenswrapper[4745]: E0127 12:31:03.261628 4745 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 12:31:03 crc kubenswrapper[4745]: E0127 12:31:03.261683 
4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca2fa659-fb2b-446c-833d-78a0314a8059-cert podName:ca2fa659-fb2b-446c-833d-78a0314a8059 nodeName:}" failed. No retries permitted until 2026-01-27 12:31:03.761664797 +0000 UTC m=+1156.566575485 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ca2fa659-fb2b-446c-833d-78a0314a8059-cert") pod "infra-operator-controller-manager-7d75bc88d5-58k9b" (UID: "ca2fa659-fb2b-446c-833d-78a0314a8059") : secret "infra-operator-webhook-server-cert" not found Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.270582 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-tzbs9"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.285056 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-8h6mr"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.286030 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-8h6mr" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.291076 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-xh89z" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.291305 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbrl9\" (UniqueName: \"kubernetes.io/projected/f11a179f-d8d9-4a2b-bce5-5319a44efdb0-kube-api-access-kbrl9\") pod \"horizon-operator-controller-manager-77d5c5b54f-kfdwp\" (UID: \"f11a179f-d8d9-4a2b-bce5-5319a44efdb0\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kfdwp" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.292053 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br8m4\" (UniqueName: \"kubernetes.io/projected/ca2fa659-fb2b-446c-833d-78a0314a8059-kube-api-access-br8m4\") pod \"infra-operator-controller-manager-7d75bc88d5-58k9b\" (UID: \"ca2fa659-fb2b-446c-833d-78a0314a8059\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-58k9b" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.303621 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kfdwp" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.312027 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.313115 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.317541 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-8h6mr"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.318700 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-6jhvk" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.326115 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-qwl6n"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.326243 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.327249 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qwl6n" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.334118 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-hg5pv"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.335942 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-hg5pv" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.341544 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-r4zxl" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.341746 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.348082 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-c6cqh" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.352210 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-hg5pv"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.367996 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-zbnj5"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.369397 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqd96\" (UniqueName: \"kubernetes.io/projected/a2316a86-a910-42cd-810f-390a7c26e2e9-kube-api-access-sqd96\") pod \"octavia-operator-controller-manager-7875d7675-tzbs9\" (UID: \"a2316a86-a910-42cd-810f-390a7c26e2e9\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-tzbs9" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.369475 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llzg6\" (UniqueName: \"kubernetes.io/projected/077960c7-14c3-4cc0-8760-681a5e59dd07-kube-api-access-llzg6\") pod \"neutron-operator-controller-manager-7ffd8d76d4-mr876\" (UID: \"077960c7-14c3-4cc0-8760-681a5e59dd07\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-mr876" Jan 27 12:31:03 crc 
kubenswrapper[4745]: I0127 12:31:03.369515 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqd8t\" (UniqueName: \"kubernetes.io/projected/837948f6-a7b7-4895-bc90-c87cce695f25-kube-api-access-qqd8t\") pod \"manila-operator-controller-manager-849fcfbb6b-hvr5f\" (UID: \"837948f6-a7b7-4895-bc90-c87cce695f25\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-hvr5f" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.369556 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kltzn\" (UniqueName: \"kubernetes.io/projected/3339982a-d5be-4486-b767-127e2873d450-kube-api-access-kltzn\") pod \"ovn-operator-controller-manager-6f75f45d54-8h6mr\" (UID: \"3339982a-d5be-4486-b767-127e2873d450\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-8h6mr" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.369586 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gmzk\" (UniqueName: \"kubernetes.io/projected/cc8f3584-bf19-41e4-837a-13afabf31909-kube-api-access-2gmzk\") pod \"keystone-operator-controller-manager-55f684fd56-dmd65\" (UID: \"cc8f3584-bf19-41e4-837a-13afabf31909\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-dmd65" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.369624 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvvq8\" (UniqueName: \"kubernetes.io/projected/8b457f32-c7cd-4113-b4a7-d4e06bc578d3-kube-api-access-lvvq8\") pod \"ironic-operator-controller-manager-768b776ffb-9w75r\" (UID: \"8b457f32-c7cd-4113-b4a7-d4e06bc578d3\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-9w75r" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.369723 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6bxl\" (UniqueName: \"kubernetes.io/projected/ec491b6d-0c60-419b-950f-d91af37597a3-kube-api-access-s6bxl\") pod \"nova-operator-controller-manager-fbd766fb6-57d5j\" (UID: \"ec491b6d-0c60-419b-950f-d91af37597a3\") " pod="openstack-operators/nova-operator-controller-manager-fbd766fb6-57d5j" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.369753 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2xbs\" (UniqueName: \"kubernetes.io/projected/c1aa3726-fa0d-487f-b9c4-813b0a72924c-kube-api-access-k2xbs\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-x6r29\" (UID: \"c1aa3726-fa0d-487f-b9c4-813b0a72924c\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-x6r29" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.377161 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-zbnj5" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.394871 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-gn8bb" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.396621 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-g429m" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.397340 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-qwl6n"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.407602 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvvq8\" (UniqueName: \"kubernetes.io/projected/8b457f32-c7cd-4113-b4a7-d4e06bc578d3-kube-api-access-lvvq8\") pod \"ironic-operator-controller-manager-768b776ffb-9w75r\" (UID: \"8b457f32-c7cd-4113-b4a7-d4e06bc578d3\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-9w75r" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.413615 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-zbnj5"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.414734 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gmzk\" (UniqueName: \"kubernetes.io/projected/cc8f3584-bf19-41e4-837a-13afabf31909-kube-api-access-2gmzk\") pod \"keystone-operator-controller-manager-55f684fd56-dmd65\" (UID: \"cc8f3584-bf19-41e4-837a-13afabf31909\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-dmd65" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.423329 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-kdgj4"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.424641 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kdgj4" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.427253 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-8hkhk" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.438209 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqd8t\" (UniqueName: \"kubernetes.io/projected/837948f6-a7b7-4895-bc90-c87cce695f25-kube-api-access-qqd8t\") pod \"manila-operator-controller-manager-849fcfbb6b-hvr5f\" (UID: \"837948f6-a7b7-4895-bc90-c87cce695f25\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-hvr5f" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.475066 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-78zrk" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.475946 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kltzn\" (UniqueName: \"kubernetes.io/projected/3339982a-d5be-4486-b767-127e2873d450-kube-api-access-kltzn\") pod \"ovn-operator-controller-manager-6f75f45d54-8h6mr\" (UID: \"3339982a-d5be-4486-b767-127e2873d450\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-8h6mr" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.475993 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2ln7\" (UniqueName: \"kubernetes.io/projected/49e5ed64-890e-430d-a177-df3309fb625c-kube-api-access-v2ln7\") pod \"swift-operator-controller-manager-547cbdb99f-hg5pv\" (UID: \"49e5ed64-890e-430d-a177-df3309fb625c\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-hg5pv" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.476023 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4fe12909-b3c6-43a8-8c28-1e2e6dd7958f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2\" (UID: \"4fe12909-b3c6-43a8-8c28-1e2e6dd7958f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.476070 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6bxl\" (UniqueName: \"kubernetes.io/projected/ec491b6d-0c60-419b-950f-d91af37597a3-kube-api-access-s6bxl\") pod \"nova-operator-controller-manager-fbd766fb6-57d5j\" (UID: \"ec491b6d-0c60-419b-950f-d91af37597a3\") " pod="openstack-operators/nova-operator-controller-manager-fbd766fb6-57d5j" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.476099 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2xbs\" (UniqueName: \"kubernetes.io/projected/c1aa3726-fa0d-487f-b9c4-813b0a72924c-kube-api-access-k2xbs\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-x6r29\" (UID: \"c1aa3726-fa0d-487f-b9c4-813b0a72924c\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-x6r29" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.476151 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqd96\" (UniqueName: \"kubernetes.io/projected/a2316a86-a910-42cd-810f-390a7c26e2e9-kube-api-access-sqd96\") pod \"octavia-operator-controller-manager-7875d7675-tzbs9\" (UID: \"a2316a86-a910-42cd-810f-390a7c26e2e9\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-tzbs9" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.476176 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54n8w\" (UniqueName: \"kubernetes.io/projected/4fe12909-b3c6-43a8-8c28-1e2e6dd7958f-kube-api-access-54n8w\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2\" (UID: \"4fe12909-b3c6-43a8-8c28-1e2e6dd7958f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.476227 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llzg6\" (UniqueName: 
\"kubernetes.io/projected/077960c7-14c3-4cc0-8760-681a5e59dd07-kube-api-access-llzg6\") pod \"neutron-operator-controller-manager-7ffd8d76d4-mr876\" (UID: \"077960c7-14c3-4cc0-8760-681a5e59dd07\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-mr876" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.476258 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9vfr\" (UniqueName: \"kubernetes.io/projected/2ad946ad-ed35-48d1-96c2-5d5dd65eb01c-kube-api-access-s9vfr\") pod \"telemetry-operator-controller-manager-799bc87c89-zbnj5\" (UID: \"2ad946ad-ed35-48d1-96c2-5d5dd65eb01c\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-zbnj5" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.476304 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4hvh\" (UniqueName: \"kubernetes.io/projected/95ef1084-25bf-4a8c-b758-f3fd81957d2b-kube-api-access-h4hvh\") pod \"placement-operator-controller-manager-79d5ccc684-qwl6n\" (UID: \"95ef1084-25bf-4a8c-b758-f3fd81957d2b\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qwl6n" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.484415 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-7jg6g" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.485007 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-dmd65" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.504159 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6bxl\" (UniqueName: \"kubernetes.io/projected/ec491b6d-0c60-419b-950f-d91af37597a3-kube-api-access-s6bxl\") pod \"nova-operator-controller-manager-fbd766fb6-57d5j\" (UID: \"ec491b6d-0c60-419b-950f-d91af37597a3\") " pod="openstack-operators/nova-operator-controller-manager-fbd766fb6-57d5j" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.508620 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-kdgj4"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.520625 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kltzn\" (UniqueName: \"kubernetes.io/projected/3339982a-d5be-4486-b767-127e2873d450-kube-api-access-kltzn\") pod \"ovn-operator-controller-manager-6f75f45d54-8h6mr\" (UID: \"3339982a-d5be-4486-b767-127e2873d450\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-8h6mr" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.540953 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2xbs\" (UniqueName: \"kubernetes.io/projected/c1aa3726-fa0d-487f-b9c4-813b0a72924c-kube-api-access-k2xbs\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-x6r29\" (UID: \"c1aa3726-fa0d-487f-b9c4-813b0a72924c\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-x6r29" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.544605 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-hvr5f" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.545571 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llzg6\" (UniqueName: \"kubernetes.io/projected/077960c7-14c3-4cc0-8760-681a5e59dd07-kube-api-access-llzg6\") pod \"neutron-operator-controller-manager-7ffd8d76d4-mr876\" (UID: \"077960c7-14c3-4cc0-8760-681a5e59dd07\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-mr876" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.555952 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-mr876" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.561537 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-d6b8bcbc9-fx8bq"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.578207 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4hvh\" (UniqueName: \"kubernetes.io/projected/95ef1084-25bf-4a8c-b758-f3fd81957d2b-kube-api-access-h4hvh\") pod \"placement-operator-controller-manager-79d5ccc684-qwl6n\" (UID: \"95ef1084-25bf-4a8c-b758-f3fd81957d2b\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qwl6n" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.578257 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2ln7\" (UniqueName: \"kubernetes.io/projected/49e5ed64-890e-430d-a177-df3309fb625c-kube-api-access-v2ln7\") pod \"swift-operator-controller-manager-547cbdb99f-hg5pv\" (UID: \"49e5ed64-890e-430d-a177-df3309fb625c\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-hg5pv" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.578293 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4fe12909-b3c6-43a8-8c28-1e2e6dd7958f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2\" (UID: \"4fe12909-b3c6-43a8-8c28-1e2e6dd7958f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.578357 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9t9c\" (UniqueName: \"kubernetes.io/projected/3d4083db-d2df-46b7-8e81-c7dddecc8d21-kube-api-access-l9t9c\") pod \"test-operator-controller-manager-69797bbcbd-kdgj4\" (UID: \"3d4083db-d2df-46b7-8e81-c7dddecc8d21\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kdgj4" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.578418 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54n8w\" (UniqueName: \"kubernetes.io/projected/4fe12909-b3c6-43a8-8c28-1e2e6dd7958f-kube-api-access-54n8w\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2\" (UID: \"4fe12909-b3c6-43a8-8c28-1e2e6dd7958f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.578473 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9vfr\" (UniqueName: 
\"kubernetes.io/projected/2ad946ad-ed35-48d1-96c2-5d5dd65eb01c-kube-api-access-s9vfr\") pod \"telemetry-operator-controller-manager-799bc87c89-zbnj5\" (UID: \"2ad946ad-ed35-48d1-96c2-5d5dd65eb01c\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-zbnj5" Jan 27 12:31:03 crc kubenswrapper[4745]: E0127 12:31:03.579120 4745 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 12:31:03 crc kubenswrapper[4745]: E0127 12:31:03.579167 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4fe12909-b3c6-43a8-8c28-1e2e6dd7958f-cert podName:4fe12909-b3c6-43a8-8c28-1e2e6dd7958f nodeName:}" failed. No retries permitted until 2026-01-27 12:31:04.079152078 +0000 UTC m=+1156.884062766 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4fe12909-b3c6-43a8-8c28-1e2e6dd7958f-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2" (UID: "4fe12909-b3c6-43a8-8c28-1e2e6dd7958f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.595828 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqd96\" (UniqueName: \"kubernetes.io/projected/a2316a86-a910-42cd-810f-390a7c26e2e9-kube-api-access-sqd96\") pod \"octavia-operator-controller-manager-7875d7675-tzbs9\" (UID: \"a2316a86-a910-42cd-810f-390a7c26e2e9\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-tzbs9" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.600750 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-d6b8bcbc9-fx8bq" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.603792 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-prxcb" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.605887 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-x6r29" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.612871 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9vfr\" (UniqueName: \"kubernetes.io/projected/2ad946ad-ed35-48d1-96c2-5d5dd65eb01c-kube-api-access-s9vfr\") pod \"telemetry-operator-controller-manager-799bc87c89-zbnj5\" (UID: \"2ad946ad-ed35-48d1-96c2-5d5dd65eb01c\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-zbnj5" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.632521 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-fbd766fb6-57d5j" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.635990 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2ln7\" (UniqueName: \"kubernetes.io/projected/49e5ed64-890e-430d-a177-df3309fb625c-kube-api-access-v2ln7\") pod \"swift-operator-controller-manager-547cbdb99f-hg5pv\" (UID: \"49e5ed64-890e-430d-a177-df3309fb625c\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-hg5pv" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.641446 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4hvh\" (UniqueName: \"kubernetes.io/projected/95ef1084-25bf-4a8c-b758-f3fd81957d2b-kube-api-access-h4hvh\") pod \"placement-operator-controller-manager-79d5ccc684-qwl6n\" (UID: \"95ef1084-25bf-4a8c-b758-f3fd81957d2b\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qwl6n" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.647584 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-tzbs9" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.648573 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-9w75r" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.656911 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-d6b8bcbc9-fx8bq"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.674658 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54n8w\" (UniqueName: \"kubernetes.io/projected/4fe12909-b3c6-43a8-8c28-1e2e6dd7958f-kube-api-access-54n8w\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2\" (UID: \"4fe12909-b3c6-43a8-8c28-1e2e6dd7958f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.687119 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-8h6mr" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.688946 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wz9p\" (UniqueName: \"kubernetes.io/projected/b78df1ec-2307-490b-bf7a-4729381c9b9e-kube-api-access-8wz9p\") pod \"watcher-operator-controller-manager-d6b8bcbc9-fx8bq\" (UID: \"b78df1ec-2307-490b-bf7a-4729381c9b9e\") " pod="openstack-operators/watcher-operator-controller-manager-d6b8bcbc9-fx8bq" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.689082 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9t9c\" (UniqueName: \"kubernetes.io/projected/3d4083db-d2df-46b7-8e81-c7dddecc8d21-kube-api-access-l9t9c\") pod \"test-operator-controller-manager-69797bbcbd-kdgj4\" (UID: \"3d4083db-d2df-46b7-8e81-c7dddecc8d21\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kdgj4" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.722907 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9t9c\" (UniqueName: \"kubernetes.io/projected/3d4083db-d2df-46b7-8e81-c7dddecc8d21-kube-api-access-l9t9c\") pod \"test-operator-controller-manager-69797bbcbd-kdgj4\" (UID: \"3d4083db-d2df-46b7-8e81-c7dddecc8d21\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kdgj4" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.725426 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qwl6n" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.747178 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-hg5pv" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.765232 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-96bd7847-d5vm4"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.767243 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-96bd7847-d5vm4" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.773262 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-d2jmp" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.773440 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.774251 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.784427 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-96bd7847-d5vm4"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.790831 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ca2fa659-fb2b-446c-833d-78a0314a8059-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-58k9b\" (UID: \"ca2fa659-fb2b-446c-833d-78a0314a8059\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-58k9b" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.790939 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wz9p\" (UniqueName: \"kubernetes.io/projected/b78df1ec-2307-490b-bf7a-4729381c9b9e-kube-api-access-8wz9p\") pod \"watcher-operator-controller-manager-d6b8bcbc9-fx8bq\" (UID: \"b78df1ec-2307-490b-bf7a-4729381c9b9e\") " pod="openstack-operators/watcher-operator-controller-manager-d6b8bcbc9-fx8bq" Jan 27 12:31:03 crc kubenswrapper[4745]: E0127 12:31:03.791018 4745 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 12:31:03 crc kubenswrapper[4745]: E0127 12:31:03.791099 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca2fa659-fb2b-446c-833d-78a0314a8059-cert podName:ca2fa659-fb2b-446c-833d-78a0314a8059 nodeName:}" failed. No retries permitted until 2026-01-27 12:31:04.791074173 +0000 UTC m=+1157.595984881 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ca2fa659-fb2b-446c-833d-78a0314a8059-cert") pod "infra-operator-controller-manager-7d75bc88d5-58k9b" (UID: "ca2fa659-fb2b-446c-833d-78a0314a8059") : secret "infra-operator-webhook-server-cert" not found Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.822880 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xhm9d"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.823606 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wz9p\" (UniqueName: \"kubernetes.io/projected/b78df1ec-2307-490b-bf7a-4729381c9b9e-kube-api-access-8wz9p\") pod \"watcher-operator-controller-manager-d6b8bcbc9-fx8bq\" (UID: \"b78df1ec-2307-490b-bf7a-4729381c9b9e\") " pod="openstack-operators/watcher-operator-controller-manager-d6b8bcbc9-fx8bq" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.824079 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xhm9d" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.830118 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xhm9d"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.837403 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-qr8w5" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.841666 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-ptkxh"] Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.894403 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-webhook-certs\") pod \"openstack-operator-controller-manager-96bd7847-d5vm4\" (UID: \"e189eaea-4a00-43c3-b92e-36d10aa9b6d1\") " pod="openstack-operators/openstack-operator-controller-manager-96bd7847-d5vm4" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.894494 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bctzx\" (UniqueName: \"kubernetes.io/projected/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-kube-api-access-bctzx\") pod \"openstack-operator-controller-manager-96bd7847-d5vm4\" (UID: \"e189eaea-4a00-43c3-b92e-36d10aa9b6d1\") " pod="openstack-operators/openstack-operator-controller-manager-96bd7847-d5vm4" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.894557 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-metrics-certs\") pod \"openstack-operator-controller-manager-96bd7847-d5vm4\" (UID: \"e189eaea-4a00-43c3-b92e-36d10aa9b6d1\") " pod="openstack-operators/openstack-operator-controller-manager-96bd7847-d5vm4" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.894584 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmdcq\" (UniqueName: \"kubernetes.io/projected/689ac5a4-566b-41df-9c90-d6f7734a2d79-kube-api-access-fmdcq\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xhm9d\" (UID: \"689ac5a4-566b-41df-9c90-d6f7734a2d79\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xhm9d" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.904790 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-zbnj5" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.948227 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kdgj4" Jan 27 12:31:03 crc kubenswrapper[4745]: I0127 12:31:03.951841 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-d6b8bcbc9-fx8bq" Jan 27 12:31:04 crc kubenswrapper[4745]: I0127 12:31:04.000574 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-webhook-certs\") pod \"openstack-operator-controller-manager-96bd7847-d5vm4\" (UID: \"e189eaea-4a00-43c3-b92e-36d10aa9b6d1\") " pod="openstack-operators/openstack-operator-controller-manager-96bd7847-d5vm4" Jan 27 12:31:04 crc kubenswrapper[4745]: I0127 12:31:04.000991 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bctzx\" (UniqueName: \"kubernetes.io/projected/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-kube-api-access-bctzx\") pod \"openstack-operator-controller-manager-96bd7847-d5vm4\" (UID: \"e189eaea-4a00-43c3-b92e-36d10aa9b6d1\") " pod="openstack-operators/openstack-operator-controller-manager-96bd7847-d5vm4" Jan 27 12:31:04 crc kubenswrapper[4745]: I0127 12:31:04.001044 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-metrics-certs\") pod \"openstack-operator-controller-manager-96bd7847-d5vm4\" (UID: \"e189eaea-4a00-43c3-b92e-36d10aa9b6d1\") " pod="openstack-operators/openstack-operator-controller-manager-96bd7847-d5vm4" Jan 27 12:31:04 crc kubenswrapper[4745]: I0127 12:31:04.001064 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmdcq\" (UniqueName: \"kubernetes.io/projected/689ac5a4-566b-41df-9c90-d6f7734a2d79-kube-api-access-fmdcq\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xhm9d\" (UID: \"689ac5a4-566b-41df-9c90-d6f7734a2d79\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xhm9d" Jan 27 12:31:04 crc kubenswrapper[4745]: E0127 12:31:04.001492 4745 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 12:31:04 crc kubenswrapper[4745]: E0127 12:31:04.001533 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-webhook-certs podName:e189eaea-4a00-43c3-b92e-36d10aa9b6d1 nodeName:}" failed. No retries permitted until 2026-01-27 12:31:04.501520245 +0000 UTC m=+1157.306430933 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-webhook-certs") pod "openstack-operator-controller-manager-96bd7847-d5vm4" (UID: "e189eaea-4a00-43c3-b92e-36d10aa9b6d1") : secret "webhook-server-cert" not found Jan 27 12:31:04 crc kubenswrapper[4745]: E0127 12:31:04.001756 4745 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 12:31:04 crc kubenswrapper[4745]: E0127 12:31:04.001787 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-metrics-certs podName:e189eaea-4a00-43c3-b92e-36d10aa9b6d1 nodeName:}" failed. No retries permitted until 2026-01-27 12:31:04.501780083 +0000 UTC m=+1157.306690771 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-metrics-certs") pod "openstack-operator-controller-manager-96bd7847-d5vm4" (UID: "e189eaea-4a00-43c3-b92e-36d10aa9b6d1") : secret "metrics-server-cert" not found Jan 27 12:31:04 crc kubenswrapper[4745]: I0127 12:31:04.021130 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kfdwp"] Jan 27 12:31:04 crc kubenswrapper[4745]: I0127 12:31:04.035640 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-kdbm9"] Jan 27 12:31:04 crc kubenswrapper[4745]: I0127 12:31:04.038354 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bctzx\" (UniqueName: \"kubernetes.io/projected/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-kube-api-access-bctzx\") pod \"openstack-operator-controller-manager-96bd7847-d5vm4\" (UID: \"e189eaea-4a00-43c3-b92e-36d10aa9b6d1\") " pod="openstack-operators/openstack-operator-controller-manager-96bd7847-d5vm4" Jan 27 12:31:04 crc kubenswrapper[4745]: I0127 12:31:04.049541 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmdcq\" (UniqueName: \"kubernetes.io/projected/689ac5a4-566b-41df-9c90-d6f7734a2d79-kube-api-access-fmdcq\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xhm9d\" (UID: \"689ac5a4-566b-41df-9c90-d6f7734a2d79\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xhm9d" Jan 27 12:31:04 crc kubenswrapper[4745]: I0127 12:31:04.103504 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4fe12909-b3c6-43a8-8c28-1e2e6dd7958f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2\" (UID: \"4fe12909-b3c6-43a8-8c28-1e2e6dd7958f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2" Jan 27 12:31:04 crc kubenswrapper[4745]: E0127 12:31:04.104014 4745 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 12:31:04 crc kubenswrapper[4745]: E0127 12:31:04.104197 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4fe12909-b3c6-43a8-8c28-1e2e6dd7958f-cert podName:4fe12909-b3c6-43a8-8c28-1e2e6dd7958f nodeName:}" failed. No retries permitted until 2026-01-27 12:31:05.104175357 +0000 UTC m=+1157.909086045 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4fe12909-b3c6-43a8-8c28-1e2e6dd7958f-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2" (UID: "4fe12909-b3c6-43a8-8c28-1e2e6dd7958f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 12:31:04 crc kubenswrapper[4745]: I0127 12:31:04.180130 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xhm9d" Jan 27 12:31:04 crc kubenswrapper[4745]: W0127 12:31:04.218093 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf11a179f_d8d9_4a2b_bce5_5319a44efdb0.slice/crio-5c30d9bdce2fc7caf111be33bd1be3ac4e5f7d29b57c539a1122649120f488cf WatchSource:0}: Error finding container 5c30d9bdce2fc7caf111be33bd1be3ac4e5f7d29b57c539a1122649120f488cf: Status 404 returned error can't find the container with id 5c30d9bdce2fc7caf111be33bd1be3ac4e5f7d29b57c539a1122649120f488cf Jan 27 12:31:04 crc kubenswrapper[4745]: I0127 12:31:04.518123 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-metrics-certs\") pod \"openstack-operator-controller-manager-96bd7847-d5vm4\" (UID: \"e189eaea-4a00-43c3-b92e-36d10aa9b6d1\") " pod="openstack-operators/openstack-operator-controller-manager-96bd7847-d5vm4" Jan 27 12:31:04 crc kubenswrapper[4745]: I0127 12:31:04.518256 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-webhook-certs\") pod \"openstack-operator-controller-manager-96bd7847-d5vm4\" (UID: \"e189eaea-4a00-43c3-b92e-36d10aa9b6d1\") " pod="openstack-operators/openstack-operator-controller-manager-96bd7847-d5vm4" Jan 27 12:31:04 crc kubenswrapper[4745]: E0127 12:31:04.518432 4745 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 12:31:04 crc kubenswrapper[4745]: E0127 12:31:04.518487 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-webhook-certs podName:e189eaea-4a00-43c3-b92e-36d10aa9b6d1 nodeName:}" failed. No retries permitted until 2026-01-27 12:31:05.518469941 +0000 UTC m=+1158.323380629 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-webhook-certs") pod "openstack-operator-controller-manager-96bd7847-d5vm4" (UID: "e189eaea-4a00-43c3-b92e-36d10aa9b6d1") : secret "webhook-server-cert" not found Jan 27 12:31:04 crc kubenswrapper[4745]: E0127 12:31:04.518895 4745 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 12:31:04 crc kubenswrapper[4745]: E0127 12:31:04.518926 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-metrics-certs podName:e189eaea-4a00-43c3-b92e-36d10aa9b6d1 nodeName:}" failed. No retries permitted until 2026-01-27 12:31:05.518917204 +0000 UTC m=+1158.323827892 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-metrics-certs") pod "openstack-operator-controller-manager-96bd7847-d5vm4" (UID: "e189eaea-4a00-43c3-b92e-36d10aa9b6d1") : secret "metrics-server-cert" not found Jan 27 12:31:04 crc kubenswrapper[4745]: I0127 12:31:04.549671 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-g429m"] Jan 27 12:31:04 crc kubenswrapper[4745]: W0127 12:31:04.562178 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dcdc404_8271_4f68_ab3e_b2158e959c6a.slice/crio-9b4e21daebe17d6d26e908b73de7faf2930d99fbe67775778bb650bc50152ef1 WatchSource:0}: Error finding container 9b4e21daebe17d6d26e908b73de7faf2930d99fbe67775778bb650bc50152ef1: Status 404 returned error can't find the container with id 9b4e21daebe17d6d26e908b73de7faf2930d99fbe67775778bb650bc50152ef1 Jan 27 12:31:04 crc kubenswrapper[4745]: I0127 12:31:04.572523 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-575ffb885b-7jg6g"] Jan 27 12:31:04 crc kubenswrapper[4745]: I0127 12:31:04.614194 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-78zrk"] Jan 27 12:31:04 crc kubenswrapper[4745]: W0127 12:31:04.665547 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e5cee05_93a0_415b_b0f8_12187035f0e0.slice/crio-d55b903f81c519dc90a504a0e7cf65f702168e375771f1a63c055621d3bb2bd7 WatchSource:0}: Error finding container d55b903f81c519dc90a504a0e7cf65f702168e375771f1a63c055621d3bb2bd7: Status 404 returned error can't find the container with id d55b903f81c519dc90a504a0e7cf65f702168e375771f1a63c055621d3bb2bd7 Jan 27 12:31:04 crc kubenswrapper[4745]: I0127 12:31:04.824083 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ca2fa659-fb2b-446c-833d-78a0314a8059-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-58k9b\" (UID: \"ca2fa659-fb2b-446c-833d-78a0314a8059\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-58k9b" Jan 27 12:31:04 crc kubenswrapper[4745]: E0127 12:31:04.824617 4745 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 12:31:04 crc kubenswrapper[4745]: E0127 12:31:04.824666 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca2fa659-fb2b-446c-833d-78a0314a8059-cert podName:ca2fa659-fb2b-446c-833d-78a0314a8059 nodeName:}" failed. No retries permitted until 2026-01-27 12:31:06.824651306 +0000 UTC m=+1159.629561994 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ca2fa659-fb2b-446c-833d-78a0314a8059-cert") pod "infra-operator-controller-manager-7d75bc88d5-58k9b" (UID: "ca2fa659-fb2b-446c-833d-78a0314a8059") : secret "infra-operator-webhook-server-cert" not found Jan 27 12:31:04 crc kubenswrapper[4745]: I0127 12:31:04.915140 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-ptkxh" event={"ID":"a545817b-adaf-4966-8472-4a599db84913","Type":"ContainerStarted","Data":"d15283ee54a7be55569db97bc64167d0232eaaa828e1281f33bc18f05116ba1d"} Jan 27 12:31:04 crc kubenswrapper[4745]: I0127 12:31:04.917060 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kfdwp" event={"ID":"f11a179f-d8d9-4a2b-bce5-5319a44efdb0","Type":"ContainerStarted","Data":"5c30d9bdce2fc7caf111be33bd1be3ac4e5f7d29b57c539a1122649120f488cf"} Jan 27 12:31:04 crc kubenswrapper[4745]: I0127 12:31:04.918749 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-78zrk" event={"ID":"1268d1f9-be48-4d61-8750-d941d0699718","Type":"ContainerStarted","Data":"c7af9578873057847e34400c44929635a98d16a94e9fce64576b1be4d54da606"} Jan 27 12:31:04 crc kubenswrapper[4745]: I0127 12:31:04.919778 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-kdbm9" event={"ID":"7fa2cf33-1cec-4874-8e41-090f3bd0f550","Type":"ContainerStarted","Data":"e189a3f8bfde35f6a7c4954e68718682660c28729267f0b95b2cbe1e953a38a4"} Jan 27 12:31:04 crc kubenswrapper[4745]: I0127 12:31:04.921846 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-g429m" event={"ID":"5dcdc404-8271-4f68-ab3e-b2158e959c6a","Type":"ContainerStarted","Data":"9b4e21daebe17d6d26e908b73de7faf2930d99fbe67775778bb650bc50152ef1"} Jan 27 12:31:04 crc kubenswrapper[4745]: I0127 12:31:04.941413 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-7jg6g" event={"ID":"6e5cee05-93a0-415b-b0f8-12187035f0e0","Type":"ContainerStarted","Data":"d55b903f81c519dc90a504a0e7cf65f702168e375771f1a63c055621d3bb2bd7"} Jan 27 12:31:05 crc kubenswrapper[4745]: I0127 12:31:05.132209 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4fe12909-b3c6-43a8-8c28-1e2e6dd7958f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2\" (UID: \"4fe12909-b3c6-43a8-8c28-1e2e6dd7958f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2" Jan 27 12:31:05 crc kubenswrapper[4745]: E0127 12:31:05.132356 4745 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 12:31:05 crc kubenswrapper[4745]: E0127 12:31:05.132410 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4fe12909-b3c6-43a8-8c28-1e2e6dd7958f-cert podName:4fe12909-b3c6-43a8-8c28-1e2e6dd7958f nodeName:}" failed. No retries permitted until 2026-01-27 12:31:07.132390345 +0000 UTC m=+1159.937301033 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4fe12909-b3c6-43a8-8c28-1e2e6dd7958f-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2" (UID: "4fe12909-b3c6-43a8-8c28-1e2e6dd7958f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 12:31:05 crc kubenswrapper[4745]: I0127 12:31:05.538180 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-metrics-certs\") pod \"openstack-operator-controller-manager-96bd7847-d5vm4\" (UID: \"e189eaea-4a00-43c3-b92e-36d10aa9b6d1\") " pod="openstack-operators/openstack-operator-controller-manager-96bd7847-d5vm4" Jan 27 12:31:05 crc kubenswrapper[4745]: I0127 12:31:05.538310 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-webhook-certs\") pod \"openstack-operator-controller-manager-96bd7847-d5vm4\" (UID: \"e189eaea-4a00-43c3-b92e-36d10aa9b6d1\") " pod="openstack-operators/openstack-operator-controller-manager-96bd7847-d5vm4" Jan 27 12:31:05 crc kubenswrapper[4745]: E0127 12:31:05.538488 4745 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 12:31:05 crc kubenswrapper[4745]: E0127 12:31:05.538548 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-webhook-certs podName:e189eaea-4a00-43c3-b92e-36d10aa9b6d1 nodeName:}" failed. No retries permitted until 2026-01-27 12:31:07.538528194 +0000 UTC m=+1160.343438882 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-webhook-certs") pod "openstack-operator-controller-manager-96bd7847-d5vm4" (UID: "e189eaea-4a00-43c3-b92e-36d10aa9b6d1") : secret "webhook-server-cert" not found Jan 27 12:31:05 crc kubenswrapper[4745]: E0127 12:31:05.538758 4745 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 12:31:05 crc kubenswrapper[4745]: E0127 12:31:05.538842 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-metrics-certs podName:e189eaea-4a00-43c3-b92e-36d10aa9b6d1 nodeName:}" failed. No retries permitted until 2026-01-27 12:31:07.538825622 +0000 UTC m=+1160.343736300 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-metrics-certs") pod "openstack-operator-controller-manager-96bd7847-d5vm4" (UID: "e189eaea-4a00-43c3-b92e-36d10aa9b6d1") : secret "metrics-server-cert" not found Jan 27 12:31:05 crc kubenswrapper[4745]: I0127 12:31:05.571152 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-x6r29"] Jan 27 12:31:05 crc kubenswrapper[4745]: I0127 12:31:05.731593 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-qwl6n"] Jan 27 12:31:05 crc kubenswrapper[4745]: I0127 12:31:05.785451 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-zbnj5"] Jan 27 12:31:05 crc kubenswrapper[4745]: I0127 12:31:05.816636 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-fbd766fb6-57d5j"] Jan 27 12:31:05 crc kubenswrapper[4745]: I0127 12:31:05.858745 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-9w75r"] Jan 27 12:31:05 crc kubenswrapper[4745]: W0127 12:31:05.876978 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b457f32_c7cd_4113_b4a7_d4e06bc578d3.slice/crio-97dee02c92e7805a2f0014e6ce1a76502a291a7015f04af9d0034d0f516a1d3f WatchSource:0}: Error finding container 97dee02c92e7805a2f0014e6ce1a76502a291a7015f04af9d0034d0f516a1d3f: Status 404 returned error can't find the container with id 97dee02c92e7805a2f0014e6ce1a76502a291a7015f04af9d0034d0f516a1d3f Jan 27 12:31:05 crc kubenswrapper[4745]: W0127 12:31:05.900284 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec491b6d_0c60_419b_950f_d91af37597a3.slice/crio-eabe5fa9c0949cb33279f00c74b89589866a77f85bf8e6799914dc450d366708 WatchSource:0}: Error finding container eabe5fa9c0949cb33279f00c74b89589866a77f85bf8e6799914dc450d366708: Status 404 returned error can't find the container with id eabe5fa9c0949cb33279f00c74b89589866a77f85bf8e6799914dc450d366708 Jan 27 12:31:05 crc kubenswrapper[4745]: I0127 12:31:05.927357 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-dmd65"] Jan 27 12:31:05 crc kubenswrapper[4745]: I0127 12:31:05.974471 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-hvr5f"] Jan 27 12:31:05 crc kubenswrapper[4745]: I0127 12:31:05.980992 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-d6b8bcbc9-fx8bq"] Jan 27 12:31:05 crc kubenswrapper[4745]: I0127 12:31:05.983953 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-9w75r" event={"ID":"8b457f32-c7cd-4113-b4a7-d4e06bc578d3","Type":"ContainerStarted","Data":"97dee02c92e7805a2f0014e6ce1a76502a291a7015f04af9d0034d0f516a1d3f"} Jan 27 12:31:05 crc kubenswrapper[4745]: I0127 12:31:05.988003 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-d6b8bcbc9-fx8bq" 
event={"ID":"b78df1ec-2307-490b-bf7a-4729381c9b9e","Type":"ContainerStarted","Data":"26c82cc3ae14f79d2d058ed8fb9c2e02aeb007ce60129ec0c70b8938cfaf6664"} Jan 27 12:31:05 crc kubenswrapper[4745]: I0127 12:31:05.990051 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-x6r29" event={"ID":"c1aa3726-fa0d-487f-b9c4-813b0a72924c","Type":"ContainerStarted","Data":"d31b7bcfc5bb72eab235c43c4f5024edb1b7f5ceff75860112bdd8eff0caa836"} Jan 27 12:31:05 crc kubenswrapper[4745]: I0127 12:31:05.991431 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-fbd766fb6-57d5j" event={"ID":"ec491b6d-0c60-419b-950f-d91af37597a3","Type":"ContainerStarted","Data":"eabe5fa9c0949cb33279f00c74b89589866a77f85bf8e6799914dc450d366708"} Jan 27 12:31:05 crc kubenswrapper[4745]: I0127 12:31:05.993677 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-dmd65" event={"ID":"cc8f3584-bf19-41e4-837a-13afabf31909","Type":"ContainerStarted","Data":"cce644570426b25666604b68e71acbe0ed2d32c61b71ca2d0e377767c882d38c"} Jan 27 12:31:05 crc kubenswrapper[4745]: I0127 12:31:05.997531 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-hvr5f" event={"ID":"837948f6-a7b7-4895-bc90-c87cce695f25","Type":"ContainerStarted","Data":"d9f6bda64f5c3b6b0ab95ca886a7ac60b00903b78f8aa171a92c9bd5311d133b"} Jan 27 12:31:06 crc kubenswrapper[4745]: I0127 12:31:06.000983 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-zbnj5" event={"ID":"2ad946ad-ed35-48d1-96c2-5d5dd65eb01c","Type":"ContainerStarted","Data":"aeb77d0034184a03eba63ba20656c83e88e325c26fa6f1a21b9baf88c3cf28d4"} Jan 27 12:31:06 crc kubenswrapper[4745]: I0127 12:31:06.010089 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qwl6n" event={"ID":"95ef1084-25bf-4a8c-b758-f3fd81957d2b","Type":"ContainerStarted","Data":"9cd45be7d81f59970964cec3436e6d5b2b569092f544a84456e892d22393ba72"} Jan 27 12:31:06 crc kubenswrapper[4745]: I0127 12:31:06.210979 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-kdgj4"] Jan 27 12:31:06 crc kubenswrapper[4745]: I0127 12:31:06.233028 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xhm9d"] Jan 27 12:31:06 crc kubenswrapper[4745]: I0127 12:31:06.244585 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-tzbs9"] Jan 27 12:31:06 crc kubenswrapper[4745]: I0127 12:31:06.257373 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-8h6mr"] Jan 27 12:31:06 crc kubenswrapper[4745]: I0127 12:31:06.267793 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-mr876"] Jan 27 12:31:06 crc kubenswrapper[4745]: W0127 12:31:06.271648 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d4083db_d2df_46b7_8e81_c7dddecc8d21.slice/crio-ff3e1217904c36abc5250982f7fac39158829f4155f01803eb986c2973978420 WatchSource:0}: 
Error finding container ff3e1217904c36abc5250982f7fac39158829f4155f01803eb986c2973978420: Status 404 returned error can't find the container with id ff3e1217904c36abc5250982f7fac39158829f4155f01803eb986c2973978420 Jan 27 12:31:06 crc kubenswrapper[4745]: I0127 12:31:06.274562 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-hg5pv"] Jan 27 12:31:06 crc kubenswrapper[4745]: W0127 12:31:06.317574 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod077960c7_14c3_4cc0_8760_681a5e59dd07.slice/crio-fe11bf8fda203bdd94711668fb1a3abd17a64df81e179303e1491f4972c1b1c6 WatchSource:0}: Error finding container fe11bf8fda203bdd94711668fb1a3abd17a64df81e179303e1491f4972c1b1c6: Status 404 returned error can't find the container with id fe11bf8fda203bdd94711668fb1a3abd17a64df81e179303e1491f4972c1b1c6 Jan 27 12:31:06 crc kubenswrapper[4745]: W0127 12:31:06.321776 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod689ac5a4_566b_41df_9c90_d6f7734a2d79.slice/crio-7bc4f5c8e07428bf27455c8eb3d3efb4aca6e271c1eaa3584a639d546e047191 WatchSource:0}: Error finding container 7bc4f5c8e07428bf27455c8eb3d3efb4aca6e271c1eaa3584a639d546e047191: Status 404 returned error can't find the container with id 7bc4f5c8e07428bf27455c8eb3d3efb4aca6e271c1eaa3584a639d546e047191 Jan 27 12:31:06 crc kubenswrapper[4745]: E0127 12:31:06.337722 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fmdcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
rabbitmq-cluster-operator-manager-668c99d594-xhm9d_openstack-operators(689ac5a4-566b-41df-9c90-d6f7734a2d79): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 12:31:06 crc kubenswrapper[4745]: E0127 12:31:06.340623 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xhm9d" podUID="689ac5a4-566b-41df-9c90-d6f7734a2d79" Jan 27 12:31:06 crc kubenswrapper[4745]: I0127 12:31:06.898606 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ca2fa659-fb2b-446c-833d-78a0314a8059-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-58k9b\" (UID: \"ca2fa659-fb2b-446c-833d-78a0314a8059\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-58k9b" Jan 27 12:31:06 crc kubenswrapper[4745]: E0127 12:31:06.898931 4745 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 12:31:06 crc kubenswrapper[4745]: E0127 12:31:06.899076 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca2fa659-fb2b-446c-833d-78a0314a8059-cert podName:ca2fa659-fb2b-446c-833d-78a0314a8059 nodeName:}" failed. No retries permitted until 2026-01-27 12:31:10.899041199 +0000 UTC m=+1163.703952067 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ca2fa659-fb2b-446c-833d-78a0314a8059-cert") pod "infra-operator-controller-manager-7d75bc88d5-58k9b" (UID: "ca2fa659-fb2b-446c-833d-78a0314a8059") : secret "infra-operator-webhook-server-cert" not found Jan 27 12:31:07 crc kubenswrapper[4745]: I0127 12:31:07.029766 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xhm9d" event={"ID":"689ac5a4-566b-41df-9c90-d6f7734a2d79","Type":"ContainerStarted","Data":"7bc4f5c8e07428bf27455c8eb3d3efb4aca6e271c1eaa3584a639d546e047191"} Jan 27 12:31:07 crc kubenswrapper[4745]: E0127 12:31:07.031889 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xhm9d" podUID="689ac5a4-566b-41df-9c90-d6f7734a2d79" Jan 27 12:31:07 crc kubenswrapper[4745]: I0127 12:31:07.037034 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-mr876" event={"ID":"077960c7-14c3-4cc0-8760-681a5e59dd07","Type":"ContainerStarted","Data":"fe11bf8fda203bdd94711668fb1a3abd17a64df81e179303e1491f4972c1b1c6"} Jan 27 12:31:07 crc kubenswrapper[4745]: I0127 12:31:07.040949 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-8h6mr" event={"ID":"3339982a-d5be-4486-b767-127e2873d450","Type":"ContainerStarted","Data":"856d97cc5c78e31d39e6dc9881a835b515b54769f9d8ebc2faacabc14f2164ee"} Jan 27 12:31:07 crc kubenswrapper[4745]: I0127 12:31:07.043908 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kdgj4" 
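
"ErrImagePull: pull QPS exceeded" is not a registry error: the kubelet rate-limits image pulls with a token bucket (registryPullQPS / registryBurst in the kubelet configuration, defaults 5 QPS with burst 10 as far as I know; verify against your cluster). With ~20 operator pods starting simultaneously the bucket drains, the rabbitmq-cluster-operator pull is rejected outright, and the pod moves to ImagePullBackOff until the next attempt. A token-bucket sketch with golang.org/x/time/rate that mirrors, but does not reproduce, the kubelet's check:

    package main

    import (
    	"fmt"

    	"golang.org/x/time/rate"
    )

    func main() {
    	// Assumed kubelet defaults: 5 pulls/sec sustained, burst of 10.
    	limiter := rate.NewLimiter(rate.Limit(5), 10)

    	// 20 pods requesting images at the same instant: the first 10 consume
    	// the burst, the rest are rejected, like "pull QPS exceeded" above.
    	for i := 1; i <= 20; i++ {
    		if limiter.Allow() {
    			fmt.Printf("pull %2d: allowed\n", i)
    		} else {
    			fmt.Printf("pull %2d: rejected -> ErrImagePull, then ImagePullBackOff\n", i)
    		}
    	}
    }
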
event={"ID":"3d4083db-d2df-46b7-8e81-c7dddecc8d21","Type":"ContainerStarted","Data":"ff3e1217904c36abc5250982f7fac39158829f4155f01803eb986c2973978420"} Jan 27 12:31:07 crc kubenswrapper[4745]: I0127 12:31:07.047903 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-hg5pv" event={"ID":"49e5ed64-890e-430d-a177-df3309fb625c","Type":"ContainerStarted","Data":"0789727e729c4f369869bc8a8ce18ab0c1658f0a140fcafb2cbd762efbf7c1eb"} Jan 27 12:31:07 crc kubenswrapper[4745]: I0127 12:31:07.051497 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-tzbs9" event={"ID":"a2316a86-a910-42cd-810f-390a7c26e2e9","Type":"ContainerStarted","Data":"3d13a1021db760d1403deb02db35cce3c2ddd3f5421c94eac383c0f9981fc1b3"} Jan 27 12:31:07 crc kubenswrapper[4745]: I0127 12:31:07.205164 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4fe12909-b3c6-43a8-8c28-1e2e6dd7958f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2\" (UID: \"4fe12909-b3c6-43a8-8c28-1e2e6dd7958f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2" Jan 27 12:31:07 crc kubenswrapper[4745]: E0127 12:31:07.209231 4745 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 12:31:07 crc kubenswrapper[4745]: E0127 12:31:07.209486 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4fe12909-b3c6-43a8-8c28-1e2e6dd7958f-cert podName:4fe12909-b3c6-43a8-8c28-1e2e6dd7958f nodeName:}" failed. No retries permitted until 2026-01-27 12:31:11.209300311 +0000 UTC m=+1164.014210999 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4fe12909-b3c6-43a8-8c28-1e2e6dd7958f-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2" (UID: "4fe12909-b3c6-43a8-8c28-1e2e6dd7958f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 12:31:07 crc kubenswrapper[4745]: I0127 12:31:07.615253 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-metrics-certs\") pod \"openstack-operator-controller-manager-96bd7847-d5vm4\" (UID: \"e189eaea-4a00-43c3-b92e-36d10aa9b6d1\") " pod="openstack-operators/openstack-operator-controller-manager-96bd7847-d5vm4" Jan 27 12:31:07 crc kubenswrapper[4745]: I0127 12:31:07.615394 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-webhook-certs\") pod \"openstack-operator-controller-manager-96bd7847-d5vm4\" (UID: \"e189eaea-4a00-43c3-b92e-36d10aa9b6d1\") " pod="openstack-operators/openstack-operator-controller-manager-96bd7847-d5vm4" Jan 27 12:31:07 crc kubenswrapper[4745]: E0127 12:31:07.615436 4745 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 12:31:07 crc kubenswrapper[4745]: E0127 12:31:07.615494 4745 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 12:31:07 crc kubenswrapper[4745]: E0127 12:31:07.615518 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-metrics-certs podName:e189eaea-4a00-43c3-b92e-36d10aa9b6d1 nodeName:}" failed. No retries permitted until 2026-01-27 12:31:11.615498261 +0000 UTC m=+1164.420408949 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-metrics-certs") pod "openstack-operator-controller-manager-96bd7847-d5vm4" (UID: "e189eaea-4a00-43c3-b92e-36d10aa9b6d1") : secret "metrics-server-cert" not found Jan 27 12:31:07 crc kubenswrapper[4745]: E0127 12:31:07.615543 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-webhook-certs podName:e189eaea-4a00-43c3-b92e-36d10aa9b6d1 nodeName:}" failed. No retries permitted until 2026-01-27 12:31:11.615529812 +0000 UTC m=+1164.420440510 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-webhook-certs") pod "openstack-operator-controller-manager-96bd7847-d5vm4" (UID: "e189eaea-4a00-43c3-b92e-36d10aa9b6d1") : secret "webhook-server-cert" not found Jan 27 12:31:08 crc kubenswrapper[4745]: E0127 12:31:08.061747 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xhm9d" podUID="689ac5a4-566b-41df-9c90-d6f7734a2d79" Jan 27 12:31:10 crc kubenswrapper[4745]: I0127 12:31:10.928554 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ca2fa659-fb2b-446c-833d-78a0314a8059-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-58k9b\" (UID: \"ca2fa659-fb2b-446c-833d-78a0314a8059\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-58k9b" Jan 27 12:31:10 crc kubenswrapper[4745]: E0127 12:31:10.928777 4745 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 12:31:10 crc kubenswrapper[4745]: E0127 12:31:10.929190 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca2fa659-fb2b-446c-833d-78a0314a8059-cert podName:ca2fa659-fb2b-446c-833d-78a0314a8059 nodeName:}" failed. No retries permitted until 2026-01-27 12:31:18.929165523 +0000 UTC m=+1171.734076211 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ca2fa659-fb2b-446c-833d-78a0314a8059-cert") pod "infra-operator-controller-manager-7d75bc88d5-58k9b" (UID: "ca2fa659-fb2b-446c-833d-78a0314a8059") : secret "infra-operator-webhook-server-cert" not found Jan 27 12:31:11 crc kubenswrapper[4745]: I0127 12:31:11.233194 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4fe12909-b3c6-43a8-8c28-1e2e6dd7958f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2\" (UID: \"4fe12909-b3c6-43a8-8c28-1e2e6dd7958f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2" Jan 27 12:31:11 crc kubenswrapper[4745]: E0127 12:31:11.233344 4745 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 12:31:11 crc kubenswrapper[4745]: E0127 12:31:11.233579 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4fe12909-b3c6-43a8-8c28-1e2e6dd7958f-cert podName:4fe12909-b3c6-43a8-8c28-1e2e6dd7958f nodeName:}" failed. No retries permitted until 2026-01-27 12:31:19.233560795 +0000 UTC m=+1172.038471483 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4fe12909-b3c6-43a8-8c28-1e2e6dd7958f-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2" (UID: "4fe12909-b3c6-43a8-8c28-1e2e6dd7958f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 12:31:11 crc kubenswrapper[4745]: I0127 12:31:11.642904 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-webhook-certs\") pod \"openstack-operator-controller-manager-96bd7847-d5vm4\" (UID: \"e189eaea-4a00-43c3-b92e-36d10aa9b6d1\") " pod="openstack-operators/openstack-operator-controller-manager-96bd7847-d5vm4" Jan 27 12:31:11 crc kubenswrapper[4745]: E0127 12:31:11.643044 4745 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 12:31:11 crc kubenswrapper[4745]: E0127 12:31:11.643130 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-webhook-certs podName:e189eaea-4a00-43c3-b92e-36d10aa9b6d1 nodeName:}" failed. No retries permitted until 2026-01-27 12:31:19.643107142 +0000 UTC m=+1172.448017890 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-webhook-certs") pod "openstack-operator-controller-manager-96bd7847-d5vm4" (UID: "e189eaea-4a00-43c3-b92e-36d10aa9b6d1") : secret "webhook-server-cert" not found Jan 27 12:31:11 crc kubenswrapper[4745]: I0127 12:31:11.643658 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-metrics-certs\") pod \"openstack-operator-controller-manager-96bd7847-d5vm4\" (UID: \"e189eaea-4a00-43c3-b92e-36d10aa9b6d1\") " pod="openstack-operators/openstack-operator-controller-manager-96bd7847-d5vm4" Jan 27 12:31:11 crc kubenswrapper[4745]: E0127 12:31:11.643847 4745 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 12:31:11 crc kubenswrapper[4745]: E0127 12:31:11.643914 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-metrics-certs podName:e189eaea-4a00-43c3-b92e-36d10aa9b6d1 nodeName:}" failed. No retries permitted until 2026-01-27 12:31:19.643891865 +0000 UTC m=+1172.448802633 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-metrics-certs") pod "openstack-operator-controller-manager-96bd7847-d5vm4" (UID: "e189eaea-4a00-43c3-b92e-36d10aa9b6d1") : secret "metrics-server-cert" not found Jan 27 12:31:18 crc kubenswrapper[4745]: I0127 12:31:18.950048 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ca2fa659-fb2b-446c-833d-78a0314a8059-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-58k9b\" (UID: \"ca2fa659-fb2b-446c-833d-78a0314a8059\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-58k9b" Jan 27 12:31:18 crc kubenswrapper[4745]: I0127 12:31:18.958220 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ca2fa659-fb2b-446c-833d-78a0314a8059-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-58k9b\" (UID: \"ca2fa659-fb2b-446c-833d-78a0314a8059\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-58k9b" Jan 27 12:31:19 crc kubenswrapper[4745]: I0127 12:31:19.217452 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-58k9b" Jan 27 12:31:19 crc kubenswrapper[4745]: I0127 12:31:19.254909 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4fe12909-b3c6-43a8-8c28-1e2e6dd7958f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2\" (UID: \"4fe12909-b3c6-43a8-8c28-1e2e6dd7958f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2" Jan 27 12:31:19 crc kubenswrapper[4745]: I0127 12:31:19.258956 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4fe12909-b3c6-43a8-8c28-1e2e6dd7958f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2\" (UID: \"4fe12909-b3c6-43a8-8c28-1e2e6dd7958f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2" Jan 27 12:31:19 crc kubenswrapper[4745]: I0127 12:31:19.314361 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2" Jan 27 12:31:19 crc kubenswrapper[4745]: I0127 12:31:19.662201 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-metrics-certs\") pod \"openstack-operator-controller-manager-96bd7847-d5vm4\" (UID: \"e189eaea-4a00-43c3-b92e-36d10aa9b6d1\") " pod="openstack-operators/openstack-operator-controller-manager-96bd7847-d5vm4" Jan 27 12:31:19 crc kubenswrapper[4745]: I0127 12:31:19.662351 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-webhook-certs\") pod \"openstack-operator-controller-manager-96bd7847-d5vm4\" (UID: \"e189eaea-4a00-43c3-b92e-36d10aa9b6d1\") " pod="openstack-operators/openstack-operator-controller-manager-96bd7847-d5vm4" Jan 27 12:31:19 crc kubenswrapper[4745]: I0127 12:31:19.668383 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-webhook-certs\") pod \"openstack-operator-controller-manager-96bd7847-d5vm4\" (UID: \"e189eaea-4a00-43c3-b92e-36d10aa9b6d1\") " pod="openstack-operators/openstack-operator-controller-manager-96bd7847-d5vm4" Jan 27 12:31:19 crc kubenswrapper[4745]: I0127 12:31:19.670761 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e189eaea-4a00-43c3-b92e-36d10aa9b6d1-metrics-certs\") pod \"openstack-operator-controller-manager-96bd7847-d5vm4\" (UID: \"e189eaea-4a00-43c3-b92e-36d10aa9b6d1\") " pod="openstack-operators/openstack-operator-controller-manager-96bd7847-d5vm4" Jan 27 12:31:19 crc kubenswrapper[4745]: I0127 12:31:19.694283 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-96bd7847-d5vm4" Jan 27 12:31:34 crc kubenswrapper[4745]: E0127 12:31:34.200874 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/telemetry-operator@sha256:1f1fea3b7df89b81756eab8e6f4c9bed01ab7e949a6ce2d7692c260f41dfbc20" Jan 27 12:31:34 crc kubenswrapper[4745]: E0127 12:31:34.201880 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/telemetry-operator@sha256:1f1fea3b7df89b81756eab8e6f4c9bed01ab7e949a6ce2d7692c260f41dfbc20,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s9vfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-799bc87c89-zbnj5_openstack-operators(2ad946ad-ed35-48d1-96c2-5d5dd65eb01c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 12:31:34 crc kubenswrapper[4745]: E0127 12:31:34.203345 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-zbnj5" podUID="2ad946ad-ed35-48d1-96c2-5d5dd65eb01c" Jan 27 12:31:34 crc kubenswrapper[4745]: E0127 12:31:34.233778 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:1f1fea3b7df89b81756eab8e6f4c9bed01ab7e949a6ce2d7692c260f41dfbc20\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-zbnj5" podUID="2ad946ad-ed35-48d1-96c2-5d5dd65eb01c" Jan 27 12:31:36 crc kubenswrapper[4745]: E0127 12:31:36.927398 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/manila-operator@sha256:82feceb236aaeae01761b172c94173d2624fe12feeb76a18c8aa2a664bafaf84" Jan 27 12:31:36 crc kubenswrapper[4745]: E0127 12:31:36.928187 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/manila-operator@sha256:82feceb236aaeae01761b172c94173d2624fe12feeb76a18c8aa2a664bafaf84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qqd8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-849fcfbb6b-hvr5f_openstack-operators(837948f6-a7b7-4895-bc90-c87cce695f25): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 12:31:36 crc kubenswrapper[4745]: E0127 12:31:36.929836 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-hvr5f" 
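
The later pull failures are a different mode: rpc error: code = Canceled with "copying config: context canceled" means the CRI image-pull RPC was torn down while the runtime was mid-transfer (for example, the caller canceling a pull that stopped making progress), not a registry-side rejection; the pods then sit in ImagePullBackOff until the next attempt. A minimal sketch of how an in-flight copy surfaces ctx.Err() as "context canceled" (the reader types are illustrative, not CRI-O internals):

    package main

    import (
    	"context"
    	"fmt"
    	"io"
    	"time"
    )

    // ctxReader aborts a copy as soon as its context is done, the way a blob
    // transfer aborts when the pull RPC's context is canceled.
    type ctxReader struct {
    	ctx context.Context
    	r   io.Reader
    }

    func (c ctxReader) Read(p []byte) (int, error) {
    	select {
    	case <-c.ctx.Done():
    		return 0, c.ctx.Err() // yields "context canceled"
    	default:
    		return c.r.Read(p)
    	}
    }

    // slowBlob simulates a slow registry: one byte every 10ms.
    type slowBlob struct{}

    func (slowBlob) Read(p []byte) (int, error) {
    	time.Sleep(10 * time.Millisecond)
    	p[0] = 0
    	return 1, nil
    }

    func main() {
    	ctx, cancel := context.WithCancel(context.Background())
    	go func() {
    		time.Sleep(50 * time.Millisecond)
    		cancel() // the caller gives up mid-pull
    	}()
    	_, err := io.Copy(io.Discard, ctxReader{ctx, slowBlob{}})
    	fmt.Println("copying config:", err) // copying config: context canceled
    }
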
podUID="837948f6-a7b7-4895-bc90-c87cce695f25" Jan 27 12:31:37 crc kubenswrapper[4745]: E0127 12:31:37.158445 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327" Jan 27 12:31:37 crc kubenswrapper[4745]: E0127 12:31:37.158609 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kltzn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-8h6mr_openstack-operators(3339982a-d5be-4486-b767-127e2873d450): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 12:31:37 crc kubenswrapper[4745]: E0127 12:31:37.160002 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-8h6mr" podUID="3339982a-d5be-4486-b767-127e2873d450" Jan 27 12:31:37 crc kubenswrapper[4745]: E0127 12:31:37.251618 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/lmiccini/manila-operator@sha256:82feceb236aaeae01761b172c94173d2624fe12feeb76a18c8aa2a664bafaf84\\\"\"" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-hvr5f" podUID="837948f6-a7b7-4895-bc90-c87cce695f25" Jan 27 12:31:37 crc kubenswrapper[4745]: E0127 12:31:37.252472 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-8h6mr" podUID="3339982a-d5be-4486-b767-127e2873d450" Jan 27 12:31:37 crc kubenswrapper[4745]: E0127 12:31:37.822161 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 27 12:31:37 crc kubenswrapper[4745]: E0127 12:31:37.822361 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kbrl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-kfdwp_openstack-operators(f11a179f-d8d9-4a2b-bce5-5319a44efdb0): ErrImagePull: rpc error: code = 
Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 12:31:37 crc kubenswrapper[4745]: E0127 12:31:37.824339 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kfdwp" podUID="f11a179f-d8d9-4a2b-bce5-5319a44efdb0" Jan 27 12:31:38 crc kubenswrapper[4745]: E0127 12:31:38.262000 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kfdwp" podUID="f11a179f-d8d9-4a2b-bce5-5319a44efdb0" Jan 27 12:31:39 crc kubenswrapper[4745]: E0127 12:31:39.810887 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/heat-operator@sha256:027f3118543388d561b452a9777783b1f866ffaf59d9a1b16a225b1c5636111f" Jan 27 12:31:39 crc kubenswrapper[4745]: E0127 12:31:39.811126 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/heat-operator@sha256:027f3118543388d561b452a9777783b1f866ffaf59d9a1b16a225b1c5636111f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t4pt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-575ffb885b-7jg6g_openstack-operators(6e5cee05-93a0-415b-b0f8-12187035f0e0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 12:31:39 crc kubenswrapper[4745]: E0127 12:31:39.812360 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-7jg6g" podUID="6e5cee05-93a0-415b-b0f8-12187035f0e0" Jan 27 12:31:40 crc kubenswrapper[4745]: E0127 12:31:40.276748 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/heat-operator@sha256:027f3118543388d561b452a9777783b1f866ffaf59d9a1b16a225b1c5636111f\\\"\"" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-7jg6g" podUID="6e5cee05-93a0-415b-b0f8-12187035f0e0" Jan 27 12:31:40 crc kubenswrapper[4745]: E0127 12:31:40.728919 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/designate-operator@sha256:d26a32730ba8b64e98f68194bd1a766aadc942392b24fa6a2cf1c136969dd99f" Jan 27 12:31:40 crc kubenswrapper[4745]: E0127 12:31:40.729204 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/designate-operator@sha256:d26a32730ba8b64e98f68194bd1a766aadc942392b24fa6a2cf1c136969dd99f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p9n2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-77554cdc5c-g429m_openstack-operators(5dcdc404-8271-4f68-ab3e-b2158e959c6a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 12:31:40 crc kubenswrapper[4745]: E0127 12:31:40.731367 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-g429m" podUID="5dcdc404-8271-4f68-ab3e-b2158e959c6a" Jan 27 12:31:41 crc kubenswrapper[4745]: E0127 12:31:41.286963 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/designate-operator@sha256:d26a32730ba8b64e98f68194bd1a766aadc942392b24fa6a2cf1c136969dd99f\\\"\"" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-g429m" podUID="5dcdc404-8271-4f68-ab3e-b2158e959c6a" Jan 27 12:31:41 crc kubenswrapper[4745]: E0127 12:31:41.405491 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/cinder-operator@sha256:7619b8e8814c4d22fcdcc392cdaba2ce279d356fc9263275c91acfba86533591" Jan 27 12:31:41 crc kubenswrapper[4745]: E0127 12:31:41.405713 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/cinder-operator@sha256:7619b8e8814c4d22fcdcc392cdaba2ce279d356fc9263275c91acfba86533591,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gwmwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-655bf9cfbb-kdbm9_openstack-operators(7fa2cf33-1cec-4874-8e41-090f3bd0f550): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 12:31:41 crc kubenswrapper[4745]: E0127 12:31:41.406976 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-kdbm9" podUID="7fa2cf33-1cec-4874-8e41-090f3bd0f550" Jan 27 12:31:42 crc kubenswrapper[4745]: E0127 12:31:42.288584 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/cinder-operator@sha256:7619b8e8814c4d22fcdcc392cdaba2ce279d356fc9263275c91acfba86533591\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-kdbm9" podUID="7fa2cf33-1cec-4874-8e41-090f3bd0f550" Jan 27 12:31:42 crc kubenswrapper[4745]: E0127 12:31:42.353342 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/glance-operator@sha256:bc45409dff26aca6bd982684cfaf093548adb6a71928f5257fe60ab5535dda39" Jan 27 12:31:42 crc kubenswrapper[4745]: E0127 12:31:42.353517 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/glance-operator@sha256:bc45409dff26aca6bd982684cfaf093548adb6a71928f5257fe60ab5535dda39,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pr66b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-67dd55ff59-78zrk_openstack-operators(1268d1f9-be48-4d61-8750-d941d0699718): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 12:31:42 crc kubenswrapper[4745]: E0127 12:31:42.354863 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-78zrk" podUID="1268d1f9-be48-4d61-8750-d941d0699718" Jan 27 12:31:43 crc kubenswrapper[4745]: E0127 12:31:43.023075 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d" Jan 27 12:31:43 crc kubenswrapper[4745]: E0127 12:31:43.023279 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l9t9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-kdgj4_openstack-operators(3d4083db-d2df-46b7-8e81-c7dddecc8d21): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 12:31:43 crc kubenswrapper[4745]: E0127 12:31:43.024497 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kdgj4" podUID="3d4083db-d2df-46b7-8e81-c7dddecc8d21" Jan 27 12:31:43 crc kubenswrapper[4745]: E0127 12:31:43.295180 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/glance-operator@sha256:bc45409dff26aca6bd982684cfaf093548adb6a71928f5257fe60ab5535dda39\\\"\"" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-78zrk" podUID="1268d1f9-be48-4d61-8750-d941d0699718" Jan 27 12:31:43 crc kubenswrapper[4745]: E0127 12:31:43.295905 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kdgj4" podUID="3d4083db-d2df-46b7-8e81-c7dddecc8d21" Jan 27 
12:31:43 crc kubenswrapper[4745]: E0127 12:31:43.938875 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922" Jan 27 12:31:43 crc kubenswrapper[4745]: E0127 12:31:43.939060 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v2ln7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-hg5pv_openstack-operators(49e5ed64-890e-430d-a177-df3309fb625c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 12:31:43 crc kubenswrapper[4745]: E0127 12:31:43.940204 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-hg5pv" podUID="49e5ed64-890e-430d-a177-df3309fb625c" Jan 27 12:31:44 crc kubenswrapper[4745]: E0127 12:31:44.301718 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-hg5pv" podUID="49e5ed64-890e-430d-a177-df3309fb625c" Jan 27 12:31:44 crc kubenswrapper[4745]: E0127 12:31:44.829307 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569" Jan 27 12:31:44 crc kubenswrapper[4745]: E0127 12:31:44.829503 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-llzg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7ffd8d76d4-mr876_openstack-operators(077960c7-14c3-4cc0-8760-681a5e59dd07): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 12:31:44 crc kubenswrapper[4745]: E0127 12:31:44.830688 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-mr876" podUID="077960c7-14c3-4cc0-8760-681a5e59dd07" Jan 27 12:31:45 crc 
kubenswrapper[4745]: E0127 12:31:45.306502 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-mr876" podUID="077960c7-14c3-4cc0-8760-681a5e59dd07" Jan 27 12:31:45 crc kubenswrapper[4745]: E0127 12:31:45.743344 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/octavia-operator@sha256:bb8d23f38682e4b987b621a3116500a76d0dc380a1bfb9ea77f18dfacdee4f49" Jan 27 12:31:45 crc kubenswrapper[4745]: E0127 12:31:45.743561 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:bb8d23f38682e4b987b621a3116500a76d0dc380a1bfb9ea77f18dfacdee4f49,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sqd96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7875d7675-tzbs9_openstack-operators(a2316a86-a910-42cd-810f-390a7c26e2e9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 12:31:45 crc kubenswrapper[4745]: E0127 12:31:45.744762 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = 
copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-tzbs9" podUID="a2316a86-a910-42cd-810f-390a7c26e2e9" Jan 27 12:31:46 crc kubenswrapper[4745]: E0127 12:31:46.312984 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:bb8d23f38682e4b987b621a3116500a76d0dc380a1bfb9ea77f18dfacdee4f49\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-tzbs9" podUID="a2316a86-a910-42cd-810f-390a7c26e2e9" Jan 27 12:31:46 crc kubenswrapper[4745]: E0127 12:31:46.471073 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487" Jan 27 12:31:46 crc kubenswrapper[4745]: E0127 12:31:46.471284 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2gmzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-55f684fd56-dmd65_openstack-operators(cc8f3584-bf19-41e4-837a-13afabf31909): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 12:31:46 crc kubenswrapper[4745]: E0127 
12:31:46.472507 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-dmd65" podUID="cc8f3584-bf19-41e4-837a-13afabf31909" Jan 27 12:31:47 crc kubenswrapper[4745]: E0127 12:31:47.019129 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d" Jan 27 12:31:47 crc kubenswrapper[4745]: E0127 12:31:47.019600 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h4hvh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-qwl6n_openstack-operators(95ef1084-25bf-4a8c-b758-f3fd81957d2b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 12:31:47 crc kubenswrapper[4745]: E0127 12:31:47.020725 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qwl6n" podUID="95ef1084-25bf-4a8c-b758-f3fd81957d2b" Jan 27 12:31:47 crc kubenswrapper[4745]: E0127 12:31:47.326908 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-dmd65" podUID="cc8f3584-bf19-41e4-837a-13afabf31909" Jan 27 12:31:47 crc kubenswrapper[4745]: E0127 12:31:47.326919 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qwl6n" podUID="95ef1084-25bf-4a8c-b758-f3fd81957d2b" Jan 27 12:31:47 crc kubenswrapper[4745]: I0127 12:31:47.333867 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-58k9b"] Jan 27 12:31:49 crc kubenswrapper[4745]: E0127 12:31:49.117755 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.107:5001/openstack-k8s-operators/watcher-operator:aab65790c36d6d2f108d7a1a628e49bafe0a749e" Jan 27 12:31:49 crc kubenswrapper[4745]: E0127 12:31:49.118079 4745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.107:5001/openstack-k8s-operators/watcher-operator:aab65790c36d6d2f108d7a1a628e49bafe0a749e" Jan 27 12:31:49 crc kubenswrapper[4745]: E0127 12:31:49.118227 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.107:5001/openstack-k8s-operators/watcher-operator:aab65790c36d6d2f108d7a1a628e49bafe0a749e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8wz9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-d6b8bcbc9-fx8bq_openstack-operators(b78df1ec-2307-490b-bf7a-4729381c9b9e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 12:31:49 crc kubenswrapper[4745]: E0127 12:31:49.119490 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-d6b8bcbc9-fx8bq" podUID="b78df1ec-2307-490b-bf7a-4729381c9b9e" Jan 27 12:31:49 crc kubenswrapper[4745]: E0127 12:31:49.336571 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.107:5001/openstack-k8s-operators/watcher-operator:aab65790c36d6d2f108d7a1a628e49bafe0a749e\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-d6b8bcbc9-fx8bq" podUID="b78df1ec-2307-490b-bf7a-4729381c9b9e" Jan 27 12:31:49 crc kubenswrapper[4745]: W0127 12:31:49.696066 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca2fa659_fb2b_446c_833d_78a0314a8059.slice/crio-ec95b2e48d5e58a381d40b6c8b0cb09ed9c84abd202ff058989fc11ee16be014 WatchSource:0}: Error finding container ec95b2e48d5e58a381d40b6c8b0cb09ed9c84abd202ff058989fc11ee16be014: Status 404 returned error can't find the container with id ec95b2e48d5e58a381d40b6c8b0cb09ed9c84abd202ff058989fc11ee16be014 Jan 27 12:31:49 crc kubenswrapper[4745]: E0127 12:31:49.826000 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/nova-operator@sha256:123ea3339f27822e161e5fa113f4c3ecbd8348533cf3067b43ebf32874eb46cc" Jan 27 12:31:49 crc kubenswrapper[4745]: E0127 12:31:49.826391 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/nova-operator@sha256:123ea3339f27822e161e5fa113f4c3ecbd8348533cf3067b43ebf32874eb46cc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 
10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s6bxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-fbd766fb6-57d5j_openstack-operators(ec491b6d-0c60-419b-950f-d91af37597a3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 27 12:31:49 crc kubenswrapper[4745]: E0127 12:31:49.828333 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-fbd766fb6-57d5j" podUID="ec491b6d-0c60-419b-950f-d91af37597a3"
Jan 27 12:31:50 crc kubenswrapper[4745]: I0127 12:31:50.342620 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-58k9b" event={"ID":"ca2fa659-fb2b-446c-833d-78a0314a8059","Type":"ContainerStarted","Data":"ec95b2e48d5e58a381d40b6c8b0cb09ed9c84abd202ff058989fc11ee16be014"}
Jan 27 12:31:50 crc kubenswrapper[4745]: E0127 12:31:50.344038 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/nova-operator@sha256:123ea3339f27822e161e5fa113f4c3ecbd8348533cf3067b43ebf32874eb46cc\\\"\"" pod="openstack-operators/nova-operator-controller-manager-fbd766fb6-57d5j" podUID="ec491b6d-0c60-419b-950f-d91af37597a3"
Jan 27 12:31:50 crc kubenswrapper[4745]: E0127 12:31:50.912171 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2"
Jan 27 12:31:50 crc kubenswrapper[4745]: E0127 12:31:50.912523 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fmdcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-xhm9d_openstack-operators(689ac5a4-566b-41df-9c90-d6f7734a2d79): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 27 12:31:50 crc kubenswrapper[4745]: E0127 12:31:50.913852 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xhm9d" podUID="689ac5a4-566b-41df-9c90-d6f7734a2d79"
Jan 27 12:31:50 crc kubenswrapper[4745]: I0127 12:31:50.927483 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2"]
Jan 27 12:31:50 crc kubenswrapper[4745]: W0127 12:31:50.930652 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fe12909_b3c6_43a8_8c28_1e2e6dd7958f.slice/crio-c4c426832a6946cbe7781a84a4e54d277276d25ae0d779c705e2828f9111fd84 WatchSource:0}: Error finding container c4c426832a6946cbe7781a84a4e54d277276d25ae0d779c705e2828f9111fd84: Status 404 returned error can't find the container with id c4c426832a6946cbe7781a84a4e54d277276d25ae0d779c705e2828f9111fd84
Jan 27 12:31:51 crc kubenswrapper[4745]: I0127 12:31:51.076772 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-96bd7847-d5vm4"]
Jan 27 12:31:51 crc kubenswrapper[4745]: W0127 12:31:51.082763 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode189eaea_4a00_43c3_b92e_36d10aa9b6d1.slice/crio-a061427e09bc6f26f4f82d63d9098d45a71615e94d38847967e9b01d438796a5 WatchSource:0}: Error finding container a061427e09bc6f26f4f82d63d9098d45a71615e94d38847967e9b01d438796a5: Status 404 returned error can't find the container with id a061427e09bc6f26f4f82d63d9098d45a71615e94d38847967e9b01d438796a5
Jan 27 12:31:51 crc kubenswrapper[4745]: I0127 12:31:51.357903 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-ptkxh" event={"ID":"a545817b-adaf-4966-8472-4a599db84913","Type":"ContainerStarted","Data":"baac3cf8680b0d5876f2343679275924dec0d1ac8723855a2116d8a5d47bf471"}
Jan 27 12:31:51 crc kubenswrapper[4745]: I0127 12:31:51.358236 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-ptkxh"
Jan 27 12:31:51 crc kubenswrapper[4745]: I0127 12:31:51.359112 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2" event={"ID":"4fe12909-b3c6-43a8-8c28-1e2e6dd7958f","Type":"ContainerStarted","Data":"c4c426832a6946cbe7781a84a4e54d277276d25ae0d779c705e2828f9111fd84"}
Jan 27 12:31:51 crc kubenswrapper[4745]: I0127 12:31:51.360377 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-9w75r" event={"ID":"8b457f32-c7cd-4113-b4a7-d4e06bc578d3","Type":"ContainerStarted","Data":"50ffc460ff200bef0186c5daac7b67547f1e2f031eb3ff014327bce3c4017e52"}
Jan 27 12:31:51 crc kubenswrapper[4745]: I0127 12:31:51.360477 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-9w75r"
Jan 27 12:31:51 crc kubenswrapper[4745]: I0127 12:31:51.361760 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-x6r29" event={"ID":"c1aa3726-fa0d-487f-b9c4-813b0a72924c","Type":"ContainerStarted","Data":"ec8e78fc57280e936f548462bfa3082d741cb59fce4538f8ea9e6ba5e1f5241f"}
Jan 27 12:31:51 crc kubenswrapper[4745]: I0127 12:31:51.362027 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-x6r29"
Jan 27 12:31:51 crc kubenswrapper[4745]: I0127 12:31:51.363550 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-96bd7847-d5vm4" event={"ID":"e189eaea-4a00-43c3-b92e-36d10aa9b6d1","Type":"ContainerStarted","Data":"a061427e09bc6f26f4f82d63d9098d45a71615e94d38847967e9b01d438796a5"}
Jan 27 12:31:51 crc kubenswrapper[4745]: I0127 12:31:51.379716 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-ptkxh" podStartSLOduration=6.498059366 podStartE2EDuration="49.379691616s" podCreationTimestamp="2026-01-27 12:31:02 +0000 UTC" firstStartedPulling="2026-01-27 12:31:04.01279966 +0000 UTC m=+1156.817710348" lastFinishedPulling="2026-01-27 12:31:46.89443191 +0000 UTC m=+1199.699342598" observedRunningTime="2026-01-27 12:31:51.372273542 +0000 UTC m=+1204.177184230" watchObservedRunningTime="2026-01-27 12:31:51.379691616 +0000 UTC m=+1204.184602304"
Jan 27 12:31:51 crc kubenswrapper[4745]: I0127 12:31:51.390231 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-9w75r" podStartSLOduration=8.380306597 podStartE2EDuration="49.39019968s" podCreationTimestamp="2026-01-27 12:31:02 +0000 UTC" firstStartedPulling="2026-01-27 12:31:05.886460133 +0000 UTC m=+1158.691370821" lastFinishedPulling="2026-01-27 12:31:46.896353216 +0000 UTC m=+1199.701263904" observedRunningTime="2026-01-27 12:31:51.389843629 +0000 UTC m=+1204.194754317" watchObservedRunningTime="2026-01-27 12:31:51.39019968 +0000 UTC m=+1204.195110358"
Jan 27 12:31:51 crc kubenswrapper[4745]: I0127 12:31:51.425092 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-x6r29" podStartSLOduration=8.115834146 podStartE2EDuration="49.425076386s" podCreationTimestamp="2026-01-27 12:31:02 +0000 UTC" firstStartedPulling="2026-01-27 12:31:05.593079508 +0000 UTC m=+1158.397990196" lastFinishedPulling="2026-01-27 12:31:46.902321748 +0000 UTC m=+1199.707232436" observedRunningTime="2026-01-27 12:31:51.418831746 +0000 UTC m=+1204.223742434" watchObservedRunningTime="2026-01-27 12:31:51.425076386 +0000 UTC m=+1204.229987074"
Jan 27 12:31:52 crc kubenswrapper[4745]: I0127 12:31:52.376419 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-96bd7847-d5vm4" event={"ID":"e189eaea-4a00-43c3-b92e-36d10aa9b6d1","Type":"ContainerStarted","Data":"aaaaf86f0d60ba5f69365ec1b5ca7bf5b9b3df57dcee14b0a3311ca56d4e86ee"}
Jan 27 12:31:52 crc kubenswrapper[4745]: I0127 12:31:52.377136 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-96bd7847-d5vm4"
Jan 27 12:31:52 crc kubenswrapper[4745]: I0127 12:31:52.414707 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-96bd7847-d5vm4" podStartSLOduration=49.41468724 podStartE2EDuration="49.41468724s" podCreationTimestamp="2026-01-27 12:31:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 12:31:52.409606323 +0000 UTC m=+1205.214517011" watchObservedRunningTime="2026-01-27 12:31:52.41468724 +0000 UTC m=+1205.219597928"
Jan 27 12:31:53 crc kubenswrapper[4745]: I0127 12:31:53.405247 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-hvr5f" event={"ID":"837948f6-a7b7-4895-bc90-c87cce695f25","Type":"ContainerStarted","Data":"0071ef685d67c5261bcb64cbaf5ca0776a2a8feb810da45f79b36061a5a64ad0"}
Jan 27 12:31:53 crc kubenswrapper[4745]: I0127 12:31:53.405752 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-hvr5f"
Jan 27 12:31:53 crc kubenswrapper[4745]: I0127 12:31:53.407111 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-zbnj5" event={"ID":"2ad946ad-ed35-48d1-96c2-5d5dd65eb01c","Type":"ContainerStarted","Data":"5bda44a16e308ba4e2e5fa9f9793131eea6a1791a4be202ee7fabdc0d7cf233a"}
Jan 27 12:31:53 crc kubenswrapper[4745]: I0127 12:31:53.407263 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-zbnj5"
Jan 27 12:31:53 crc kubenswrapper[4745]: I0127 12:31:53.409239 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-8h6mr" event={"ID":"3339982a-d5be-4486-b767-127e2873d450","Type":"ContainerStarted","Data":"e058e4f846127c90cb008155d990e2a0a68e703430d478f29d32e16b775a0077"}
Jan 27 12:31:53 crc kubenswrapper[4745]: I0127 12:31:53.409477 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-8h6mr"
Jan 27 12:31:53 crc kubenswrapper[4745]: I0127 12:31:53.432137 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-hvr5f" podStartSLOduration=4.717934004 podStartE2EDuration="51.432121266s" podCreationTimestamp="2026-01-27 12:31:02 +0000 UTC" firstStartedPulling="2026-01-27 12:31:05.90092826 +0000 UTC m=+1158.705838948" lastFinishedPulling="2026-01-27 12:31:52.615115522 +0000 UTC m=+1205.420026210" observedRunningTime="2026-01-27 12:31:53.429466189 +0000 UTC m=+1206.234376877" watchObservedRunningTime="2026-01-27 12:31:53.432121266 +0000 UTC m=+1206.237031954"
Jan 27 12:31:53 crc kubenswrapper[4745]: I0127 12:31:53.462591 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-8h6mr" podStartSLOduration=4.156224879 podStartE2EDuration="50.462572054s" podCreationTimestamp="2026-01-27 12:31:03 +0000 UTC" firstStartedPulling="2026-01-27 12:31:06.310677392 +0000 UTC m=+1159.115588080" lastFinishedPulling="2026-01-27 12:31:52.617024567 +0000 UTC m=+1205.421935255" observedRunningTime="2026-01-27 12:31:53.459583458 +0000 UTC m=+1206.264494146" watchObservedRunningTime="2026-01-27 12:31:53.462572054 +0000 UTC m=+1206.267482742"
Jan 27 12:31:53 crc kubenswrapper[4745]: I0127 12:31:53.466121 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-zbnj5" podStartSLOduration=3.73858939 podStartE2EDuration="50.466104826s" podCreationTimestamp="2026-01-27 12:31:03 +0000 UTC" firstStartedPulling="2026-01-27 12:31:05.889573543 +0000 UTC m=+1158.694484231" lastFinishedPulling="2026-01-27 12:31:52.617088979 +0000 UTC m=+1205.421999667" observedRunningTime="2026-01-27 12:31:53.445559764 +0000 UTC m=+1206.250470452" watchObservedRunningTime="2026-01-27 12:31:53.466104826 +0000 UTC m=+1206.271015514"
Jan 27 12:31:56 crc kubenswrapper[4745]: I0127 12:31:56.429381 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2" event={"ID":"4fe12909-b3c6-43a8-8c28-1e2e6dd7958f","Type":"ContainerStarted","Data":"aa032f177e914a7e78ecbd71159924707e7770c53266ef745159d5d04404475f"}
Jan 27 12:31:56 crc kubenswrapper[4745]: I0127 12:31:56.429979 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2"
Jan 27 12:31:56 crc kubenswrapper[4745]: I0127 12:31:56.432154 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kfdwp" event={"ID":"f11a179f-d8d9-4a2b-bce5-5319a44efdb0","Type":"ContainerStarted","Data":"8e6185838bbbb4cde5f95ada2da57a03a7c69768d3c227fa56c30babb768e840"}
Jan 27 12:31:56 crc kubenswrapper[4745]: I0127 12:31:56.432410 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kfdwp"
Jan 27 12:31:56 crc kubenswrapper[4745]: I0127 12:31:56.433639 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-78zrk" event={"ID":"1268d1f9-be48-4d61-8750-d941d0699718","Type":"ContainerStarted","Data":"e4992757d15c7c62be4218e945d01ee88067a361b9be80b1ce7329abb77871ca"}
Jan 27 12:31:56 crc kubenswrapper[4745]: I0127 12:31:56.433925 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-78zrk"
Jan 27 12:31:56 crc kubenswrapper[4745]: I0127 12:31:56.437225 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-7jg6g" event={"ID":"6e5cee05-93a0-415b-b0f8-12187035f0e0","Type":"ContainerStarted","Data":"aeb847f19ffeff1fae3f13657a1410555aeae839dbe4bd6f7076c48495b3f8ba"}
Jan 27 12:31:56 crc kubenswrapper[4745]: I0127 12:31:56.437458 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-7jg6g"
Jan 27 12:31:56 crc kubenswrapper[4745]: I0127 12:31:56.438749 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-kdbm9" event={"ID":"7fa2cf33-1cec-4874-8e41-090f3bd0f550","Type":"ContainerStarted","Data":"d70ec01ec225acc767d807074e641e271777629b0125ed59bf781f5062eee27b"}
Jan 27 12:31:56 crc kubenswrapper[4745]: I0127 12:31:56.438934 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-kdbm9"
Jan 27 12:31:56 crc kubenswrapper[4745]: I0127 12:31:56.440380 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-58k9b" event={"ID":"ca2fa659-fb2b-446c-833d-78a0314a8059","Type":"ContainerStarted","Data":"fd745d6cc6819cbfa44573528b189ec3a08050b86f6b7bc84642c1322de4b0cc"}
Jan 27 12:31:56 crc kubenswrapper[4745]: I0127 12:31:56.440545 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-58k9b"
Jan 27 12:31:56 crc kubenswrapper[4745]: I0127 12:31:56.441722 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-g429m" event={"ID":"5dcdc404-8271-4f68-ab3e-b2158e959c6a","Type":"ContainerStarted","Data":"5e2238c0b360d22159af523e95132d0b5cd6feb097ca914c10de86f2bf8840a0"}
Jan 27 12:31:56 crc kubenswrapper[4745]: I0127 12:31:56.441867 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-g429m"
Jan 27 12:31:56 crc kubenswrapper[4745]: I0127 12:31:56.452857 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2" podStartSLOduration=48.973926041 podStartE2EDuration="53.452838454s" podCreationTimestamp="2026-01-27 12:31:03 +0000 UTC" firstStartedPulling="2026-01-27 12:31:50.932472402 +0000 UTC m=+1203.737383090" lastFinishedPulling="2026-01-27 12:31:55.411384815 +0000 UTC m=+1208.216295503" observedRunningTime="2026-01-27 12:31:56.450669252 +0000 UTC m=+1209.255579940" watchObservedRunningTime="2026-01-27 12:31:56.452838454 +0000 UTC m=+1209.257749142"
Jan 27 12:31:56 crc kubenswrapper[4745]: I0127 12:31:56.494205 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kfdwp" podStartSLOduration=3.685169194 podStartE2EDuration="54.494187227s" podCreationTimestamp="2026-01-27 12:31:02 +0000 UTC" firstStartedPulling="2026-01-27 12:31:04.40058708 +0000 UTC m=+1157.205497768" lastFinishedPulling="2026-01-27 12:31:55.209605113 +0000 UTC m=+1208.014515801" observedRunningTime="2026-01-27 12:31:56.494120915 +0000 UTC m=+1209.299031603" watchObservedRunningTime="2026-01-27 12:31:56.494187227 +0000 UTC m=+1209.299097915"
Jan 27 12:31:56 crc kubenswrapper[4745]: I0127 12:31:56.498371 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-58k9b" podStartSLOduration=48.98822832 podStartE2EDuration="54.498351417s" podCreationTimestamp="2026-01-27 12:31:02 +0000 UTC" firstStartedPulling="2026-01-27 12:31:49.698022974 +0000 UTC m=+1202.502933662" lastFinishedPulling="2026-01-27 12:31:55.208146071 +0000 UTC m=+1208.013056759" observedRunningTime="2026-01-27 12:31:56.474076647 +0000 UTC m=+1209.278987335" watchObservedRunningTime="2026-01-27 12:31:56.498351417 +0000 UTC m=+1209.303262105"
Jan 27 12:31:56 crc kubenswrapper[4745]: I0127 12:31:56.520781 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-g429m" podStartSLOduration=4.017876324 podStartE2EDuration="54.520761014s" podCreationTimestamp="2026-01-27 12:31:02 +0000 UTC" firstStartedPulling="2026-01-27 12:31:04.564443518 +0000 UTC m=+1157.369354206" lastFinishedPulling="2026-01-27 12:31:55.067328188 +0000 UTC m=+1207.872238896" observedRunningTime="2026-01-27 12:31:56.513157735 +0000 UTC m=+1209.318068423" watchObservedRunningTime="2026-01-27 12:31:56.520761014 +0000 UTC m=+1209.325671702"
Jan 27 12:31:56 crc kubenswrapper[4745]: I0127 12:31:56.537801 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-kdbm9" podStartSLOduration=3.52356633 podStartE2EDuration="54.537785655s" podCreationTimestamp="2026-01-27 12:31:02 +0000 UTC" firstStartedPulling="2026-01-27 12:31:04.40060969 +0000 UTC m=+1157.205520378" lastFinishedPulling="2026-01-27 12:31:55.414829015 +0000 UTC m=+1208.219739703" observedRunningTime="2026-01-27 12:31:56.533718858 +0000 UTC m=+1209.338629566" watchObservedRunningTime="2026-01-27 12:31:56.537785655 +0000 UTC m=+1209.342696343"
Jan 27 12:31:56 crc kubenswrapper[4745]: I0127 12:31:56.551210 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-78zrk" podStartSLOduration=3.534812705 podStartE2EDuration="54.551190602s" podCreationTimestamp="2026-01-27 12:31:02 +0000 UTC" firstStartedPulling="2026-01-27 12:31:04.704090677 +0000 UTC m=+1157.509001365" lastFinishedPulling="2026-01-27 12:31:55.720468574 +0000 UTC m=+1208.525379262" observedRunningTime="2026-01-27 12:31:56.547468745 +0000 UTC m=+1209.352379453" watchObservedRunningTime="2026-01-27 12:31:56.551190602 +0000 UTC m=+1209.356101290"
Jan 27 12:31:56 crc kubenswrapper[4745]: I0127 12:31:56.571698 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-7jg6g" podStartSLOduration=3.556676786 podStartE2EDuration="54.571679183s" podCreationTimestamp="2026-01-27 12:31:02 +0000 UTC" firstStartedPulling="2026-01-27 12:31:04.703758687 +0000 UTC m=+1157.508669375" lastFinishedPulling="2026-01-27 12:31:55.718761084 +0000 UTC m=+1208.523671772" observedRunningTime="2026-01-27 12:31:56.565396412 +0000 UTC m=+1209.370307110" watchObservedRunningTime="2026-01-27 12:31:56.571679183 +0000 UTC m=+1209.376589871"
Jan 27 12:31:59 crc kubenswrapper[4745]: I0127 12:31:59.488957 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-hg5pv" event={"ID":"49e5ed64-890e-430d-a177-df3309fb625c","Type":"ContainerStarted","Data":"c030579b4914ee91161da4fe440f7d8b4f3ef780e1fd0bc1c8076fbe4a7eb874"}
Jan 27 12:31:59 crc kubenswrapper[4745]: I0127 12:31:59.491090 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-mr876" event={"ID":"077960c7-14c3-4cc0-8760-681a5e59dd07","Type":"ContainerStarted","Data":"22ccf80fb23a7d2053b9545b559dfd18f139bfc7070c7ceb84d015d6e1d03109"}
Jan 27 12:31:59 crc kubenswrapper[4745]: I0127 12:31:59.491338 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-mr876"
Jan 27 12:31:59 crc kubenswrapper[4745]: I0127 12:31:59.492984 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kdgj4" event={"ID":"3d4083db-d2df-46b7-8e81-c7dddecc8d21","Type":"ContainerStarted","Data":"07b0cc386d360be34ad1781579db4b70bef6717cf4fa84bfa5006d7d0e2b257a"}
Jan 27 12:31:59 crc kubenswrapper[4745]: I0127 12:31:59.493196 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kdgj4"
Jan 27 12:31:59 crc kubenswrapper[4745]: I0127 12:31:59.517512 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-hg5pv" podStartSLOduration=4.147923501 podStartE2EDuration="56.517497932s" podCreationTimestamp="2026-01-27 12:31:03 +0000 UTC" firstStartedPulling="2026-01-27 12:31:06.304662569 +0000 UTC m=+1159.109573257" lastFinishedPulling="2026-01-27 12:31:58.674237 +0000 UTC m=+1211.479147688" observedRunningTime="2026-01-27 12:31:59.516274026 +0000 UTC m=+1212.321184724" watchObservedRunningTime="2026-01-27 12:31:59.517497932 +0000 UTC m=+1212.322408610"
Jan 27 12:31:59 crc kubenswrapper[4745]: I0127 12:31:59.541244 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kdgj4" podStartSLOduration=4.14447401 podStartE2EDuration="56.541223196s" podCreationTimestamp="2026-01-27 12:31:03 +0000 UTC" firstStartedPulling="2026-01-27 12:31:06.280987135 +0000 UTC m=+1159.085897823" lastFinishedPulling="2026-01-27 12:31:58.677736321 +0000 UTC m=+1211.482647009" observedRunningTime="2026-01-27 12:31:59.536803609 +0000 UTC m=+1212.341714297" watchObservedRunningTime="2026-01-27 12:31:59.541223196 +0000 UTC m=+1212.346133884"
Jan 27 12:31:59 crc kubenswrapper[4745]: I0127 12:31:59.577205 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-mr876" podStartSLOduration=5.225252761 podStartE2EDuration="57.577185844s" podCreationTimestamp="2026-01-27 12:31:02 +0000 UTC" firstStartedPulling="2026-01-27 12:31:06.321731701 +0000 UTC m=+1159.126642389" lastFinishedPulling="2026-01-27 12:31:58.673664774 +0000 UTC m=+1211.478575472" observedRunningTime="2026-01-27 12:31:59.562958313 +0000 UTC m=+1212.367869011" watchObservedRunningTime="2026-01-27 12:31:59.577185844 +0000 UTC m=+1212.382096532"
Jan 27 12:31:59 crc kubenswrapper[4745]: I0127 12:31:59.701264 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-96bd7847-d5vm4"
Jan 27 12:32:01 crc kubenswrapper[4745]: I0127 12:32:01.512677 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-tzbs9" event={"ID":"a2316a86-a910-42cd-810f-390a7c26e2e9","Type":"ContainerStarted","Data":"fc62db9331a89d7020a15c6642d76760be4eee93f92352448cdb69b31a6aec8e"}
Jan 27 12:32:01 crc kubenswrapper[4745]: I0127 12:32:01.513396 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-tzbs9"
Jan 27 12:32:01 crc kubenswrapper[4745]: I0127 12:32:01.516909 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qwl6n" event={"ID":"95ef1084-25bf-4a8c-b758-f3fd81957d2b","Type":"ContainerStarted","Data":"e5707619eb648c42bb89f79b43669fac453d0560c3d7707d6b7945dabb9be26e"}
Jan 27 12:32:01 crc kubenswrapper[4745]: I0127 12:32:01.517512 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qwl6n"
Jan 27 12:32:01 crc kubenswrapper[4745]: I0127 12:32:01.529294 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-tzbs9" podStartSLOduration=5.320142929 podStartE2EDuration="59.529278548s" podCreationTimestamp="2026-01-27 12:31:02 +0000 UTC" firstStartedPulling="2026-01-27 12:31:06.31130738 +0000 UTC m=+1159.116218068" lastFinishedPulling="2026-01-27 12:32:00.520442999 +0000 UTC m=+1213.325353687" observedRunningTime="2026-01-27 12:32:01.528091314 +0000 UTC m=+1214.333002002" watchObservedRunningTime="2026-01-27 12:32:01.529278548 +0000 UTC m=+1214.334189236"
Jan 27 12:32:02 crc kubenswrapper[4745]: I0127 12:32:02.529739 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-d6b8bcbc9-fx8bq" event={"ID":"b78df1ec-2307-490b-bf7a-4729381c9b9e","Type":"ContainerStarted","Data":"f5b4d00ff260af7d3c5055cb1df3181367dee9415d9f22112c4d9e27a320b05c"}
Jan 27 12:32:02 crc kubenswrapper[4745]: I0127 12:32:02.530017 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-d6b8bcbc9-fx8bq"
Jan 27 12:32:02 crc kubenswrapper[4745]: I0127 12:32:02.531396 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-dmd65" event={"ID":"cc8f3584-bf19-41e4-837a-13afabf31909","Type":"ContainerStarted","Data":"9b1df704f4c39a069765189b614c46294900ac5eb4747d5a7ad1e30e2995ea89"}
Jan 27 12:32:02 crc kubenswrapper[4745]: I0127 12:32:02.531761 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-dmd65"
Jan 27 12:32:02 crc kubenswrapper[4745]: I0127 12:32:02.547626 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-d6b8bcbc9-fx8bq" podStartSLOduration=4.33958119 podStartE2EDuration="59.54759012s" podCreationTimestamp="2026-01-27 12:31:03 +0000 UTC" firstStartedPulling="2026-01-27 12:31:05.927366763 +0000 UTC m=+1158.732277451" lastFinishedPulling="2026-01-27 12:32:01.135375693 +0000 UTC m=+1213.940286381" observedRunningTime="2026-01-27 12:32:02.543000448 +0000 UTC m=+1215.347911146" watchObservedRunningTime="2026-01-27 12:32:02.54759012 +0000 UTC m=+1215.352500808"
Jan 27 12:32:02 crc kubenswrapper[4745]: I0127 12:32:02.547953 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qwl6n" podStartSLOduration=4.888322914 podStartE2EDuration="59.547947681s" podCreationTimestamp="2026-01-27 12:31:03 +0000 UTC" firstStartedPulling="2026-01-27 12:31:05.79002666 +0000 UTC m=+1158.594937348" lastFinishedPulling="2026-01-27 12:32:00.449651427 +0000 UTC m=+1213.254562115" observedRunningTime="2026-01-27 12:32:01.548893164 +0000 UTC m=+1214.353803862" watchObservedRunningTime="2026-01-27 12:32:02.547947681 +0000 UTC m=+1215.352858369"
Jan 27 12:32:02 crc kubenswrapper[4745]: I0127 12:32:02.562669 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-dmd65" podStartSLOduration=4.967052642 podStartE2EDuration="1m0.562651825s" podCreationTimestamp="2026-01-27 12:31:02 +0000 UTC" firstStartedPulling="2026-01-27 12:31:05.889556072 +0000 UTC m=+1158.694466760" lastFinishedPulling="2026-01-27 12:32:01.485155255 +0000 UTC m=+1214.290065943" observedRunningTime="2026-01-27 12:32:02.558710691 +0000 UTC m=+1215.363621389" watchObservedRunningTime="2026-01-27 12:32:02.562651825 +0000 UTC m=+1215.367562513"
Jan 27 12:32:03 crc kubenswrapper[4745]: I0127 12:32:03.047235 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-ptkxh"
Jan 27 12:32:03 crc kubenswrapper[4745]: I0127 12:32:03.081349 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-kdbm9"
Jan 27 12:32:03 crc kubenswrapper[4745]: I0127 12:32:03.307624 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kfdwp"
Jan 27 12:32:03 crc kubenswrapper[4745]: I0127 12:32:03.399669 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-g429m"
Jan 27 12:32:03 crc kubenswrapper[4745]: I0127 12:32:03.480101 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-78zrk"
Jan 27 12:32:03 crc kubenswrapper[4745]: I0127 12:32:03.489643 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-7jg6g"
Jan 27 12:32:03 crc kubenswrapper[4745]: I0127 12:32:03.549547 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-hvr5f"
Jan 27 12:32:03 crc kubenswrapper[4745]: I0127 12:32:03.623931 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-x6r29"
Jan 27 12:32:03 crc kubenswrapper[4745]: I0127 12:32:03.651524 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-9w75r"
Jan 27 12:32:03 crc kubenswrapper[4745]: I0127 12:32:03.690384 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-8h6mr"
Jan 27 12:32:03 crc kubenswrapper[4745]: I0127 12:32:03.748197 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-hg5pv"
Jan 27 12:32:03 crc kubenswrapper[4745]: I0127 12:32:03.911352 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-zbnj5"
Jan 27 12:32:06 crc kubenswrapper[4745]: E0127 12:32:06.076274 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xhm9d" podUID="689ac5a4-566b-41df-9c90-d6f7734a2d79"
Jan 27 12:32:09 crc kubenswrapper[4745]: I0127 12:32:09.224670 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-58k9b"
Jan 27 12:32:09 crc kubenswrapper[4745]: I0127 12:32:09.319927 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2"
Jan 27 12:32:13 crc kubenswrapper[4745]: I0127 12:32:13.488679 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-dmd65"
Jan 27 12:32:13 crc kubenswrapper[4745]: I0127 12:32:13.558803 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-mr876"
Jan 27 12:32:13 crc kubenswrapper[4745]: I0127 12:32:13.650523 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-tzbs9"
Jan 27 12:32:13 crc kubenswrapper[4745]: I0127 12:32:13.728291 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qwl6n"
Jan 27 12:32:13 crc kubenswrapper[4745]: I0127 12:32:13.750098 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-hg5pv"
Jan 27 12:32:13 crc kubenswrapper[4745]: I0127 12:32:13.951325 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kdgj4"
Jan 27 12:32:13 crc kubenswrapper[4745]: I0127 12:32:13.955582 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-d6b8bcbc9-fx8bq"
Jan 27 12:32:19 crc kubenswrapper[4745]: I0127 12:32:19.677730 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xhm9d" event={"ID":"689ac5a4-566b-41df-9c90-d6f7734a2d79","Type":"ContainerStarted","Data":"b94690353a94500fab1377b487c74c265d63da7b15a3bf09b4601c57dd3be7cd"}
Jan 27 12:32:19 crc kubenswrapper[4745]: I0127 12:32:19.689043 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-fbd766fb6-57d5j" event={"ID":"ec491b6d-0c60-419b-950f-d91af37597a3","Type":"ContainerStarted","Data":"a3669a9da8bde79b97c0238037ab2b61330d2e8a94d64af493c90e7a6ef45713"}
Jan 27 12:32:19 crc kubenswrapper[4745]: I0127 12:32:19.689536 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-fbd766fb6-57d5j"
Jan 27 12:32:19 crc kubenswrapper[4745]: I0127 12:32:19.705347 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xhm9d" podStartSLOduration=4.21508611 podStartE2EDuration="1m16.705328016s" podCreationTimestamp="2026-01-27 12:31:03 +0000 UTC" firstStartedPulling="2026-01-27 12:31:06.337561628 +0000 UTC m=+1159.142472316" lastFinishedPulling="2026-01-27 12:32:18.827803504 +0000 UTC m=+1231.632714222" observedRunningTime="2026-01-27 12:32:19.699216211 +0000 UTC m=+1232.504126899" watchObservedRunningTime="2026-01-27 12:32:19.705328016 +0000 UTC m=+1232.510238694"
Jan 27 12:32:19 crc kubenswrapper[4745]: I0127 12:32:19.721999 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-fbd766fb6-57d5j" podStartSLOduration=4.824578026 podStartE2EDuration="1m17.721980981s" podCreationTimestamp="2026-01-27 12:31:02 +0000 UTC" firstStartedPulling="2026-01-27 12:31:05.926465677 +0000 UTC m=+1158.731376365" lastFinishedPulling="2026-01-27 12:32:18.823868632 +0000 UTC m=+1231.628779320" observedRunningTime="2026-01-27 12:32:19.717885694 +0000 UTC m=+1232.522796382" watchObservedRunningTime="2026-01-27 12:32:19.721980981 +0000 UTC m=+1232.526891669"
Jan 27 12:32:33 crc kubenswrapper[4745]: I0127 12:32:33.634787 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-fbd766fb6-57d5j"
Jan 27 12:32:35 crc kubenswrapper[4745]: I0127 12:32:35.966873 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 12:32:35 crc kubenswrapper[4745]: I0127 12:32:35.967235 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 12:33:05 crc kubenswrapper[4745]: I0127 12:33:05.967113 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 12:33:05 crc kubenswrapper[4745]: I0127 12:33:05.967837 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 12:33:35 crc kubenswrapper[4745]: I0127 12:33:35.967765 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 12:33:35 crc kubenswrapper[4745]: I0127 12:33:35.968827 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 12:33:35 crc kubenswrapper[4745]: I0127 12:33:35.968914 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp"
Jan 27 12:33:35 crc kubenswrapper[4745]: I0127 12:33:35.971788 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b73037367855afd08946b74fe618bef765bb998591e2496436c39bc0f24265e8"} pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 12:33:35 crc kubenswrapper[4745]: I0127 12:33:35.971971 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" containerID="cri-o://b73037367855afd08946b74fe618bef765bb998591e2496436c39bc0f24265e8" gracePeriod=600
Jan 27 12:33:36 crc kubenswrapper[4745]: I0127 12:33:36.252416 4745 generic.go:334] "Generic (PLEG): container finished" podID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerID="b73037367855afd08946b74fe618bef765bb998591e2496436c39bc0f24265e8" exitCode=0
Jan 27 12:33:36 crc kubenswrapper[4745]: I0127 12:33:36.252519 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerDied","Data":"b73037367855afd08946b74fe618bef765bb998591e2496436c39bc0f24265e8"}
Jan 27 12:33:36 crc kubenswrapper[4745]: I0127 12:33:36.253179 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerStarted","Data":"a6a5433ca38393ff94716bf68e0e4f44c98509e24edf8bea4957ad6fd4d223a6"}
Jan 27 12:33:36 crc kubenswrapper[4745]: I0127 12:33:36.253203 4745 scope.go:117] "RemoveContainer" containerID="4bf427099ba09136d50759e57a90c739bd38ee9b8bfd72165c113357c32a5692"
Jan 27 12:36:05 crc kubenswrapper[4745]: I0127 12:36:05.967444 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 12:36:05 crc kubenswrapper[4745]: I0127 12:36:05.968151 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 12:36:29 crc kubenswrapper[4745]: I0127 12:36:29.652931 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sp7xt"]
Jan 27 12:36:29 crc kubenswrapper[4745]: I0127 12:36:29.655187 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sp7xt"
Jan 27 12:36:29 crc kubenswrapper[4745]: I0127 12:36:29.670398 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sp7xt"]
Jan 27 12:36:29 crc kubenswrapper[4745]: I0127 12:36:29.834011 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79181cc0-3318-40be-8e9c-70846c31c39a-utilities\") pod \"redhat-operators-sp7xt\" (UID: \"79181cc0-3318-40be-8e9c-70846c31c39a\") " pod="openshift-marketplace/redhat-operators-sp7xt"
Jan 27 12:36:29 crc kubenswrapper[4745]: I0127 12:36:29.834092 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzmmp\" (UniqueName: \"kubernetes.io/projected/79181cc0-3318-40be-8e9c-70846c31c39a-kube-api-access-lzmmp\") pod \"redhat-operators-sp7xt\" (UID: \"79181cc0-3318-40be-8e9c-70846c31c39a\") " pod="openshift-marketplace/redhat-operators-sp7xt"
Jan 27 12:36:29 crc kubenswrapper[4745]: I0127 12:36:29.834204 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79181cc0-3318-40be-8e9c-70846c31c39a-catalog-content\") pod \"redhat-operators-sp7xt\" (UID: \"79181cc0-3318-40be-8e9c-70846c31c39a\") " pod="openshift-marketplace/redhat-operators-sp7xt"
Jan 27 12:36:29 crc kubenswrapper[4745]: I0127 12:36:29.935412 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79181cc0-3318-40be-8e9c-70846c31c39a-utilities\") pod \"redhat-operators-sp7xt\" (UID: \"79181cc0-3318-40be-8e9c-70846c31c39a\") " pod="openshift-marketplace/redhat-operators-sp7xt"
Jan 27 12:36:29 crc kubenswrapper[4745]: I0127 12:36:29.935477 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzmmp\" (UniqueName: \"kubernetes.io/projected/79181cc0-3318-40be-8e9c-70846c31c39a-kube-api-access-lzmmp\") pod \"redhat-operators-sp7xt\" (UID: \"79181cc0-3318-40be-8e9c-70846c31c39a\") " pod="openshift-marketplace/redhat-operators-sp7xt"
Jan 27 12:36:29 crc kubenswrapper[4745]: I0127 12:36:29.935552 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79181cc0-3318-40be-8e9c-70846c31c39a-catalog-content\") pod \"redhat-operators-sp7xt\" (UID: \"79181cc0-3318-40be-8e9c-70846c31c39a\") " pod="openshift-marketplace/redhat-operators-sp7xt"
Jan 27 12:36:29 crc kubenswrapper[4745]: I0127 12:36:29.936058 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79181cc0-3318-40be-8e9c-70846c31c39a-utilities\") pod \"redhat-operators-sp7xt\" (UID: \"79181cc0-3318-40be-8e9c-70846c31c39a\") " pod="openshift-marketplace/redhat-operators-sp7xt"
Jan 27 12:36:29 crc kubenswrapper[4745]: I0127 12:36:29.936087 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79181cc0-3318-40be-8e9c-70846c31c39a-catalog-content\") pod \"redhat-operators-sp7xt\" (UID: \"79181cc0-3318-40be-8e9c-70846c31c39a\") " pod="openshift-marketplace/redhat-operators-sp7xt"
Jan 27 12:36:29 crc kubenswrapper[4745]: I0127 12:36:29.956312 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzmmp\" (UniqueName: \"kubernetes.io/projected/79181cc0-3318-40be-8e9c-70846c31c39a-kube-api-access-lzmmp\") pod \"redhat-operators-sp7xt\" (UID: \"79181cc0-3318-40be-8e9c-70846c31c39a\") " pod="openshift-marketplace/redhat-operators-sp7xt"
Jan 27 12:36:29 crc kubenswrapper[4745]: I0127 12:36:29.976720 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sp7xt"
Jan 27 12:36:30 crc kubenswrapper[4745]: I0127 12:36:30.441257 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sp7xt"]
Jan 27 12:36:30 crc kubenswrapper[4745]: W0127 12:36:30.456935 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79181cc0_3318_40be_8e9c_70846c31c39a.slice/crio-e4fec52be1ce11a5e5233db48bdf09142ea206deb30ac8b2984c35735459d86f WatchSource:0}: Error finding container e4fec52be1ce11a5e5233db48bdf09142ea206deb30ac8b2984c35735459d86f: Status 404 returned error can't find the container with id e4fec52be1ce11a5e5233db48bdf09142ea206deb30ac8b2984c35735459d86f
Jan 27 12:36:30 crc kubenswrapper[4745]: I0127 12:36:30.595609 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sp7xt" event={"ID":"79181cc0-3318-40be-8e9c-70846c31c39a","Type":"ContainerStarted","Data":"e4fec52be1ce11a5e5233db48bdf09142ea206deb30ac8b2984c35735459d86f"}
Jan 27 12:36:31 crc kubenswrapper[4745]: I0127 12:36:31.605553 4745 generic.go:334] "Generic (PLEG): container finished" podID="79181cc0-3318-40be-8e9c-70846c31c39a" containerID="09b401691774909e52dbd078df545a1f48d04b821add1801297725ec4b131ec4" exitCode=0
Jan 27 12:36:31 crc kubenswrapper[4745]: I0127 12:36:31.605613 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sp7xt" event={"ID":"79181cc0-3318-40be-8e9c-70846c31c39a","Type":"ContainerDied","Data":"09b401691774909e52dbd078df545a1f48d04b821add1801297725ec4b131ec4"}
Jan 27 12:36:31 crc kubenswrapper[4745]: I0127 12:36:31.607661 4745 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 12:36:32 crc kubenswrapper[4745]: I0127 12:36:32.615899 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sp7xt" event={"ID":"79181cc0-3318-40be-8e9c-70846c31c39a","Type":"ContainerStarted","Data":"c16240b572b7eec15121276e2cccfa5f406336c41d5a2a2c258b9b9a5304cefc"}
Jan 27 12:36:33 crc kubenswrapper[4745]: I0127 12:36:33.627166 4745 generic.go:334] "Generic (PLEG): container finished" podID="79181cc0-3318-40be-8e9c-70846c31c39a" containerID="c16240b572b7eec15121276e2cccfa5f406336c41d5a2a2c258b9b9a5304cefc" exitCode=0
Jan 27 12:36:33 crc kubenswrapper[4745]: I0127 12:36:33.627212 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sp7xt" event={"ID":"79181cc0-3318-40be-8e9c-70846c31c39a","Type":"ContainerDied","Data":"c16240b572b7eec15121276e2cccfa5f406336c41d5a2a2c258b9b9a5304cefc"}
Jan 27 12:36:34 crc kubenswrapper[4745]: I0127 12:36:34.635690 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sp7xt" event={"ID":"79181cc0-3318-40be-8e9c-70846c31c39a","Type":"ContainerStarted","Data":"9ae84673c601d9e834d6e301a261a29b60835504c0a51e395dd7bdd18f048f24"}
Jan 27 12:36:34 crc kubenswrapper[4745]: I0127 12:36:34.660008 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sp7xt" podStartSLOduration=3.229945923 podStartE2EDuration="5.659990943s" podCreationTimestamp="2026-01-27 12:36:29 +0000 UTC" firstStartedPulling="2026-01-27 12:36:31.60738926 +0000 UTC m=+1484.412299968" lastFinishedPulling="2026-01-27 12:36:34.0374343 +0000 UTC m=+1486.842344988" observedRunningTime="2026-01-27 12:36:34.653734684 +0000 UTC m=+1487.458645392" watchObservedRunningTime="2026-01-27 12:36:34.659990943 +0000 UTC m=+1487.464901631"
Jan 27 12:36:35 crc kubenswrapper[4745]: I0127 12:36:35.968071 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 12:36:35 crc kubenswrapper[4745]: I0127 12:36:35.968609 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 12:36:39 crc kubenswrapper[4745]: I0127 12:36:39.977463 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sp7xt"
Jan 27 12:36:39 crc kubenswrapper[4745]: I0127 12:36:39.977842 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sp7xt"
Jan 27 12:36:40 crc kubenswrapper[4745]: I0127 12:36:40.111202 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sp7xt"
Jan 27 12:36:40 crc kubenswrapper[4745]: I0127 12:36:40.719078 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sp7xt"
Jan 27 12:36:40 crc kubenswrapper[4745]: I0127 12:36:40.775341 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sp7xt"]
Jan 27 12:36:42 crc kubenswrapper[4745]: I0127 12:36:42.689487 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sp7xt" podUID="79181cc0-3318-40be-8e9c-70846c31c39a" containerName="registry-server" containerID="cri-o://9ae84673c601d9e834d6e301a261a29b60835504c0a51e395dd7bdd18f048f24" gracePeriod=2
Jan 27 12:36:45 crc kubenswrapper[4745]: I0127 12:36:45.720785 4745 generic.go:334] "Generic (PLEG): container finished" podID="79181cc0-3318-40be-8e9c-70846c31c39a" containerID="9ae84673c601d9e834d6e301a261a29b60835504c0a51e395dd7bdd18f048f24" exitCode=0
Jan 27 12:36:45 crc kubenswrapper[4745]: I0127 12:36:45.720996 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sp7xt" event={"ID":"79181cc0-3318-40be-8e9c-70846c31c39a","Type":"ContainerDied","Data":"9ae84673c601d9e834d6e301a261a29b60835504c0a51e395dd7bdd18f048f24"}
Jan 27 12:36:46 crc kubenswrapper[4745]: I0127 12:36:46.633688 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sp7xt"
Jan 27 12:36:46 crc kubenswrapper[4745]: I0127 12:36:46.785299 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzmmp\" (UniqueName: \"kubernetes.io/projected/79181cc0-3318-40be-8e9c-70846c31c39a-kube-api-access-lzmmp\") pod \"79181cc0-3318-40be-8e9c-70846c31c39a\" (UID: \"79181cc0-3318-40be-8e9c-70846c31c39a\") "
Jan 27 12:36:46 crc kubenswrapper[4745]: I0127 12:36:46.785436 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79181cc0-3318-40be-8e9c-70846c31c39a-utilities\") pod \"79181cc0-3318-40be-8e9c-70846c31c39a\" (UID: \"79181cc0-3318-40be-8e9c-70846c31c39a\") "
Jan 27 12:36:46 crc kubenswrapper[4745]: I0127 12:36:46.785489 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79181cc0-3318-40be-8e9c-70846c31c39a-catalog-content\") pod \"79181cc0-3318-40be-8e9c-70846c31c39a\" (UID: \"79181cc0-3318-40be-8e9c-70846c31c39a\") "
Jan 27 12:36:46 crc kubenswrapper[4745]: I0127 12:36:46.787465 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79181cc0-3318-40be-8e9c-70846c31c39a-utilities" (OuterVolumeSpecName: "utilities") pod "79181cc0-3318-40be-8e9c-70846c31c39a" (UID: "79181cc0-3318-40be-8e9c-70846c31c39a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 12:36:46 crc kubenswrapper[4745]: I0127 12:36:46.793601 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79181cc0-3318-40be-8e9c-70846c31c39a-kube-api-access-lzmmp" (OuterVolumeSpecName: "kube-api-access-lzmmp") pod "79181cc0-3318-40be-8e9c-70846c31c39a" (UID: "79181cc0-3318-40be-8e9c-70846c31c39a"). InnerVolumeSpecName "kube-api-access-lzmmp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 12:36:46 crc kubenswrapper[4745]: I0127 12:36:46.798145 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sp7xt" event={"ID":"79181cc0-3318-40be-8e9c-70846c31c39a","Type":"ContainerDied","Data":"e4fec52be1ce11a5e5233db48bdf09142ea206deb30ac8b2984c35735459d86f"}
Jan 27 12:36:46 crc kubenswrapper[4745]: I0127 12:36:46.798197 4745 scope.go:117] "RemoveContainer" containerID="9ae84673c601d9e834d6e301a261a29b60835504c0a51e395dd7bdd18f048f24"
Jan 27 12:36:46 crc kubenswrapper[4745]: I0127 12:36:46.798249 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sp7xt"
Jan 27 12:36:46 crc kubenswrapper[4745]: I0127 12:36:46.828557 4745 scope.go:117] "RemoveContainer" containerID="c16240b572b7eec15121276e2cccfa5f406336c41d5a2a2c258b9b9a5304cefc"
Jan 27 12:36:46 crc kubenswrapper[4745]: I0127 12:36:46.846408 4745 scope.go:117] "RemoveContainer" containerID="09b401691774909e52dbd078df545a1f48d04b821add1801297725ec4b131ec4"
Jan 27 12:36:46 crc kubenswrapper[4745]: I0127 12:36:46.886992 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzmmp\" (UniqueName: \"kubernetes.io/projected/79181cc0-3318-40be-8e9c-70846c31c39a-kube-api-access-lzmmp\") on node \"crc\" DevicePath \"\""
Jan 27 12:36:46 crc kubenswrapper[4745]: I0127 12:36:46.887029 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79181cc0-3318-40be-8e9c-70846c31c39a-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 12:36:46 crc kubenswrapper[4745]: I0127 12:36:46.915143 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79181cc0-3318-40be-8e9c-70846c31c39a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79181cc0-3318-40be-8e9c-70846c31c39a" (UID: "79181cc0-3318-40be-8e9c-70846c31c39a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 12:36:46 crc kubenswrapper[4745]: I0127 12:36:46.988191 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79181cc0-3318-40be-8e9c-70846c31c39a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 12:36:47 crc kubenswrapper[4745]: I0127 12:36:47.130204 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sp7xt"]
Jan 27 12:36:47 crc kubenswrapper[4745]: I0127 12:36:47.140945 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sp7xt"]
Jan 27 12:36:48 crc kubenswrapper[4745]: I0127 12:36:48.086083 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79181cc0-3318-40be-8e9c-70846c31c39a" path="/var/lib/kubelet/pods/79181cc0-3318-40be-8e9c-70846c31c39a/volumes"
Jan 27 12:37:05 crc kubenswrapper[4745]: I0127 12:37:05.967773 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 12:37:05 crc kubenswrapper[4745]: I0127 12:37:05.968256 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 12:37:05 crc kubenswrapper[4745]: I0127 12:37:05.968301 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp"
Jan 27 12:37:05 crc kubenswrapper[4745]: I0127 12:37:05.968906 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a6a5433ca38393ff94716bf68e0e4f44c98509e24edf8bea4957ad6fd4d223a6"} pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 12:37:05 crc kubenswrapper[4745]: I0127 12:37:05.968959 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" containerID="cri-o://a6a5433ca38393ff94716bf68e0e4f44c98509e24edf8bea4957ad6fd4d223a6" gracePeriod=600
Jan 27 12:37:06 crc kubenswrapper[4745]: I0127 12:37:06.969576 4745 generic.go:334] "Generic (PLEG): container finished" podID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerID="a6a5433ca38393ff94716bf68e0e4f44c98509e24edf8bea4957ad6fd4d223a6" exitCode=0
Jan 27 12:37:06 crc kubenswrapper[4745]: I0127 12:37:06.969678 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerDied","Data":"a6a5433ca38393ff94716bf68e0e4f44c98509e24edf8bea4957ad6fd4d223a6"}
Jan 27 12:37:06 crc kubenswrapper[4745]: I0127 12:37:06.970027 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerStarted","Data":"9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5"}
Jan 27 12:37:06 crc kubenswrapper[4745]: I0127 12:37:06.970058 4745 scope.go:117] "RemoveContainer" containerID="b73037367855afd08946b74fe618bef765bb998591e2496436c39bc0f24265e8"
Jan 27 12:39:17 crc kubenswrapper[4745]: I0127 12:39:17.639232 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bgf6q"]
Jan 27 12:39:17 crc kubenswrapper[4745]: E0127 12:39:17.640060 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79181cc0-3318-40be-8e9c-70846c31c39a" containerName="extract-utilities"
Jan 27 12:39:17 crc kubenswrapper[4745]: I0127 12:39:17.640080 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="79181cc0-3318-40be-8e9c-70846c31c39a" containerName="extract-utilities"
Jan 27 12:39:17 crc kubenswrapper[4745]: E0127 12:39:17.640118 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79181cc0-3318-40be-8e9c-70846c31c39a" containerName="registry-server"
Jan 27 12:39:17 crc kubenswrapper[4745]: I0127 12:39:17.640126 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="79181cc0-3318-40be-8e9c-70846c31c39a" containerName="registry-server"
Jan 27 12:39:17 crc kubenswrapper[4745]: E0127 12:39:17.640133 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79181cc0-3318-40be-8e9c-70846c31c39a" containerName="extract-content"
Jan 27 12:39:17 crc kubenswrapper[4745]: I0127 12:39:17.640140 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="79181cc0-3318-40be-8e9c-70846c31c39a" containerName="extract-content"
Jan 27 12:39:17 crc kubenswrapper[4745]: I0127 12:39:17.640317 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="79181cc0-3318-40be-8e9c-70846c31c39a" containerName="registry-server"
Jan 27 12:39:17 crc kubenswrapper[4745]: I0127 12:39:17.641430 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bgf6q"
Jan 27 12:39:17 crc kubenswrapper[4745]: I0127 12:39:17.655148 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bgf6q"]
Jan 27 12:39:17 crc kubenswrapper[4745]: I0127 12:39:17.740370 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9618160-e490-4024-a0bb-7d84dc5e8fdd-catalog-content\") pod \"community-operators-bgf6q\" (UID: \"a9618160-e490-4024-a0bb-7d84dc5e8fdd\") " pod="openshift-marketplace/community-operators-bgf6q"
Jan 27 12:39:17 crc kubenswrapper[4745]: I0127 12:39:17.740751 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ctsg\" (UniqueName: \"kubernetes.io/projected/a9618160-e490-4024-a0bb-7d84dc5e8fdd-kube-api-access-4ctsg\") pod \"community-operators-bgf6q\" (UID: \"a9618160-e490-4024-a0bb-7d84dc5e8fdd\") " pod="openshift-marketplace/community-operators-bgf6q"
Jan 27 12:39:17 crc kubenswrapper[4745]: I0127 12:39:17.740788 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9618160-e490-4024-a0bb-7d84dc5e8fdd-utilities\") pod \"community-operators-bgf6q\" (UID: \"a9618160-e490-4024-a0bb-7d84dc5e8fdd\") " pod="openshift-marketplace/community-operators-bgf6q"
Jan 27 12:39:17 crc kubenswrapper[4745]: I0127 12:39:17.842325 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9618160-e490-4024-a0bb-7d84dc5e8fdd-catalog-content\") pod \"community-operators-bgf6q\" (UID: \"a9618160-e490-4024-a0bb-7d84dc5e8fdd\") " pod="openshift-marketplace/community-operators-bgf6q"
Jan 27 12:39:17 crc kubenswrapper[4745]: I0127 12:39:17.842427 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ctsg\" (UniqueName: \"kubernetes.io/projected/a9618160-e490-4024-a0bb-7d84dc5e8fdd-kube-api-access-4ctsg\") pod \"community-operators-bgf6q\" (UID: \"a9618160-e490-4024-a0bb-7d84dc5e8fdd\") " pod="openshift-marketplace/community-operators-bgf6q"
Jan 27 12:39:17 crc kubenswrapper[4745]: I0127 12:39:17.842452 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9618160-e490-4024-a0bb-7d84dc5e8fdd-utilities\") pod \"community-operators-bgf6q\" (UID: \"a9618160-e490-4024-a0bb-7d84dc5e8fdd\") " pod="openshift-marketplace/community-operators-bgf6q"
Jan 27 12:39:17 crc kubenswrapper[4745]: I0127 12:39:17.843250 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9618160-e490-4024-a0bb-7d84dc5e8fdd-utilities\") pod \"community-operators-bgf6q\" (UID: \"a9618160-e490-4024-a0bb-7d84dc5e8fdd\") " pod="openshift-marketplace/community-operators-bgf6q"
Jan 27 12:39:17 crc kubenswrapper[4745]: I0127 12:39:17.843531 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9618160-e490-4024-a0bb-7d84dc5e8fdd-catalog-content\") pod \"community-operators-bgf6q\" (UID: \"a9618160-e490-4024-a0bb-7d84dc5e8fdd\") " pod="openshift-marketplace/community-operators-bgf6q"
Jan 27 12:39:17 crc kubenswrapper[4745]: I0127 12:39:17.863834 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ctsg\" (UniqueName: \"kubernetes.io/projected/a9618160-e490-4024-a0bb-7d84dc5e8fdd-kube-api-access-4ctsg\") pod \"community-operators-bgf6q\" (UID: \"a9618160-e490-4024-a0bb-7d84dc5e8fdd\") " pod="openshift-marketplace/community-operators-bgf6q"
Jan 27 12:39:17 crc kubenswrapper[4745]: I0127 12:39:17.964304 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bgf6q"
Jan 27 12:39:18 crc kubenswrapper[4745]: I0127 12:39:18.551190 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bgf6q"]
Jan 27 12:39:18 crc kubenswrapper[4745]: I0127 12:39:18.945942 4745 generic.go:334] "Generic (PLEG): container finished" podID="a9618160-e490-4024-a0bb-7d84dc5e8fdd" containerID="68d02f03d7662f4c4d7108e30902ca46f81185e069d562b784643dd4cd9d8cdd" exitCode=0
Jan 27 12:39:18 crc kubenswrapper[4745]: I0127 12:39:18.946005 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgf6q" event={"ID":"a9618160-e490-4024-a0bb-7d84dc5e8fdd","Type":"ContainerDied","Data":"68d02f03d7662f4c4d7108e30902ca46f81185e069d562b784643dd4cd9d8cdd"}
Jan 27 12:39:18 crc kubenswrapper[4745]: I0127 12:39:18.946083 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgf6q" event={"ID":"a9618160-e490-4024-a0bb-7d84dc5e8fdd","Type":"ContainerStarted","Data":"8ee690cd05d5bc2b3b9cc496a8e5c585f03ae4933b3a3522c4c5e080c78e55cb"}
Jan 27 12:39:19 crc kubenswrapper[4745]: I0127 12:39:19.962129 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgf6q" event={"ID":"a9618160-e490-4024-a0bb-7d84dc5e8fdd","Type":"ContainerStarted","Data":"a6a7e7c5cdbe447ac1c33cf2434d8f5b8e3b7fb483cdf0efce0d5330012059af"}
Jan 27 12:39:20 crc kubenswrapper[4745]: I0127 12:39:20.971485 4745 generic.go:334] "Generic (PLEG): container finished" podID="a9618160-e490-4024-a0bb-7d84dc5e8fdd" containerID="a6a7e7c5cdbe447ac1c33cf2434d8f5b8e3b7fb483cdf0efce0d5330012059af" exitCode=0
Jan 27 12:39:20 crc kubenswrapper[4745]: I0127 12:39:20.971533 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgf6q" event={"ID":"a9618160-e490-4024-a0bb-7d84dc5e8fdd","Type":"ContainerDied","Data":"a6a7e7c5cdbe447ac1c33cf2434d8f5b8e3b7fb483cdf0efce0d5330012059af"}
Jan 27 12:39:21 crc kubenswrapper[4745]: I0127 12:39:21.982016 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgf6q" event={"ID":"a9618160-e490-4024-a0bb-7d84dc5e8fdd","Type":"ContainerStarted","Data":"ad76c554b61fd04101c022ab9d4bd7a0cfc17f7766b786505c36fcd02fad936c"}
Jan 27 12:39:22 crc kubenswrapper[4745]: I0127 12:39:22.010441 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bgf6q" podStartSLOduration=2.589646664 podStartE2EDuration="5.010424046s" podCreationTimestamp="2026-01-27 12:39:17 +0000 UTC" firstStartedPulling="2026-01-27 12:39:18.947387752 +0000 UTC m=+1651.752298450" lastFinishedPulling="2026-01-27 12:39:21.368165144 +0000 UTC m=+1654.173075832" observedRunningTime="2026-01-27 12:39:22.007912565 +0000 UTC m=+1654.812823253" watchObservedRunningTime="2026-01-27 12:39:22.010424046 +0000 UTC m=+1654.815334734"
Jan 27 12:39:24 crc kubenswrapper[4745]: I0127 12:39:24.821710 4745 kubelet.go:2421] "SyncLoop ADD" source="api"
pods=["openshift-marketplace/certified-operators-7sxkn"] Jan 27 12:39:24 crc kubenswrapper[4745]: I0127 12:39:24.823716 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7sxkn" Jan 27 12:39:24 crc kubenswrapper[4745]: I0127 12:39:24.832755 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7sxkn"] Jan 27 12:39:24 crc kubenswrapper[4745]: I0127 12:39:24.953528 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k5jm\" (UniqueName: \"kubernetes.io/projected/ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5-kube-api-access-4k5jm\") pod \"certified-operators-7sxkn\" (UID: \"ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5\") " pod="openshift-marketplace/certified-operators-7sxkn" Jan 27 12:39:24 crc kubenswrapper[4745]: I0127 12:39:24.953585 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5-catalog-content\") pod \"certified-operators-7sxkn\" (UID: \"ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5\") " pod="openshift-marketplace/certified-operators-7sxkn" Jan 27 12:39:24 crc kubenswrapper[4745]: I0127 12:39:24.953609 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5-utilities\") pod \"certified-operators-7sxkn\" (UID: \"ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5\") " pod="openshift-marketplace/certified-operators-7sxkn" Jan 27 12:39:25 crc kubenswrapper[4745]: I0127 12:39:25.055452 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k5jm\" (UniqueName: \"kubernetes.io/projected/ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5-kube-api-access-4k5jm\") pod \"certified-operators-7sxkn\" (UID: \"ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5\") " pod="openshift-marketplace/certified-operators-7sxkn" Jan 27 12:39:25 crc kubenswrapper[4745]: I0127 12:39:25.055517 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5-catalog-content\") pod \"certified-operators-7sxkn\" (UID: \"ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5\") " pod="openshift-marketplace/certified-operators-7sxkn" Jan 27 12:39:25 crc kubenswrapper[4745]: I0127 12:39:25.055539 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5-utilities\") pod \"certified-operators-7sxkn\" (UID: \"ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5\") " pod="openshift-marketplace/certified-operators-7sxkn" Jan 27 12:39:25 crc kubenswrapper[4745]: I0127 12:39:25.056132 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5-utilities\") pod \"certified-operators-7sxkn\" (UID: \"ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5\") " pod="openshift-marketplace/certified-operators-7sxkn" Jan 27 12:39:25 crc kubenswrapper[4745]: I0127 12:39:25.056459 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5-catalog-content\") pod \"certified-operators-7sxkn\" (UID: 
\"ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5\") " pod="openshift-marketplace/certified-operators-7sxkn" Jan 27 12:39:25 crc kubenswrapper[4745]: I0127 12:39:25.080201 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k5jm\" (UniqueName: \"kubernetes.io/projected/ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5-kube-api-access-4k5jm\") pod \"certified-operators-7sxkn\" (UID: \"ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5\") " pod="openshift-marketplace/certified-operators-7sxkn" Jan 27 12:39:25 crc kubenswrapper[4745]: I0127 12:39:25.142788 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7sxkn" Jan 27 12:39:25 crc kubenswrapper[4745]: I0127 12:39:25.619950 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7sxkn"] Jan 27 12:39:26 crc kubenswrapper[4745]: I0127 12:39:26.011289 4745 generic.go:334] "Generic (PLEG): container finished" podID="ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5" containerID="9d96db630327e2561ac6043e32df537a271d02697417684cf0c465bf936132ad" exitCode=0 Jan 27 12:39:26 crc kubenswrapper[4745]: I0127 12:39:26.011390 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sxkn" event={"ID":"ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5","Type":"ContainerDied","Data":"9d96db630327e2561ac6043e32df537a271d02697417684cf0c465bf936132ad"} Jan 27 12:39:26 crc kubenswrapper[4745]: I0127 12:39:26.012465 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sxkn" event={"ID":"ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5","Type":"ContainerStarted","Data":"ef32562215babba9507c38f2c6da5aab0b5071b5827a10decf75d438077d2964"} Jan 27 12:39:27 crc kubenswrapper[4745]: I0127 12:39:27.965133 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bgf6q" Jan 27 12:39:27 crc kubenswrapper[4745]: I0127 12:39:27.965464 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bgf6q" Jan 27 12:39:28 crc kubenswrapper[4745]: I0127 12:39:28.008235 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bgf6q" Jan 27 12:39:28 crc kubenswrapper[4745]: I0127 12:39:28.034177 4745 generic.go:334] "Generic (PLEG): container finished" podID="ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5" containerID="9a6a4c1b53ffcede1306d0bf41b2b588f4c088a33c8df49bf5d49c2ae679c620" exitCode=0 Jan 27 12:39:28 crc kubenswrapper[4745]: I0127 12:39:28.034318 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sxkn" event={"ID":"ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5","Type":"ContainerDied","Data":"9a6a4c1b53ffcede1306d0bf41b2b588f4c088a33c8df49bf5d49c2ae679c620"} Jan 27 12:39:28 crc kubenswrapper[4745]: I0127 12:39:28.090267 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bgf6q" Jan 27 12:39:29 crc kubenswrapper[4745]: I0127 12:39:29.048004 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sxkn" event={"ID":"ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5","Type":"ContainerStarted","Data":"c95ef8810d7e0ad0b209e96d1d33024ce925e3546741a5baa4be5b1e399e0058"} Jan 27 12:39:29 crc kubenswrapper[4745]: I0127 12:39:29.075738 4745 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/certified-operators-7sxkn" podStartSLOduration=2.522419469 podStartE2EDuration="5.07571266s" podCreationTimestamp="2026-01-27 12:39:24 +0000 UTC" firstStartedPulling="2026-01-27 12:39:26.012928213 +0000 UTC m=+1658.817838901" lastFinishedPulling="2026-01-27 12:39:28.566221384 +0000 UTC m=+1661.371132092" observedRunningTime="2026-01-27 12:39:29.073518378 +0000 UTC m=+1661.878429096" watchObservedRunningTime="2026-01-27 12:39:29.07571266 +0000 UTC m=+1661.880623348" Jan 27 12:39:30 crc kubenswrapper[4745]: I0127 12:39:30.207113 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bgf6q"] Jan 27 12:39:30 crc kubenswrapper[4745]: I0127 12:39:30.207325 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bgf6q" podUID="a9618160-e490-4024-a0bb-7d84dc5e8fdd" containerName="registry-server" containerID="cri-o://ad76c554b61fd04101c022ab9d4bd7a0cfc17f7766b786505c36fcd02fad936c" gracePeriod=2 Jan 27 12:39:30 crc kubenswrapper[4745]: I0127 12:39:30.691846 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bgf6q" Jan 27 12:39:30 crc kubenswrapper[4745]: I0127 12:39:30.844236 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9618160-e490-4024-a0bb-7d84dc5e8fdd-utilities\") pod \"a9618160-e490-4024-a0bb-7d84dc5e8fdd\" (UID: \"a9618160-e490-4024-a0bb-7d84dc5e8fdd\") " Jan 27 12:39:30 crc kubenswrapper[4745]: I0127 12:39:30.844365 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ctsg\" (UniqueName: \"kubernetes.io/projected/a9618160-e490-4024-a0bb-7d84dc5e8fdd-kube-api-access-4ctsg\") pod \"a9618160-e490-4024-a0bb-7d84dc5e8fdd\" (UID: \"a9618160-e490-4024-a0bb-7d84dc5e8fdd\") " Jan 27 12:39:30 crc kubenswrapper[4745]: I0127 12:39:30.844393 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9618160-e490-4024-a0bb-7d84dc5e8fdd-catalog-content\") pod \"a9618160-e490-4024-a0bb-7d84dc5e8fdd\" (UID: \"a9618160-e490-4024-a0bb-7d84dc5e8fdd\") " Jan 27 12:39:30 crc kubenswrapper[4745]: I0127 12:39:30.845410 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9618160-e490-4024-a0bb-7d84dc5e8fdd-utilities" (OuterVolumeSpecName: "utilities") pod "a9618160-e490-4024-a0bb-7d84dc5e8fdd" (UID: "a9618160-e490-4024-a0bb-7d84dc5e8fdd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:39:30 crc kubenswrapper[4745]: I0127 12:39:30.849484 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9618160-e490-4024-a0bb-7d84dc5e8fdd-kube-api-access-4ctsg" (OuterVolumeSpecName: "kube-api-access-4ctsg") pod "a9618160-e490-4024-a0bb-7d84dc5e8fdd" (UID: "a9618160-e490-4024-a0bb-7d84dc5e8fdd"). InnerVolumeSpecName "kube-api-access-4ctsg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:39:30 crc kubenswrapper[4745]: I0127 12:39:30.945891 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9618160-e490-4024-a0bb-7d84dc5e8fdd-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 12:39:30 crc kubenswrapper[4745]: I0127 12:39:30.945946 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ctsg\" (UniqueName: \"kubernetes.io/projected/a9618160-e490-4024-a0bb-7d84dc5e8fdd-kube-api-access-4ctsg\") on node \"crc\" DevicePath \"\"" Jan 27 12:39:31 crc kubenswrapper[4745]: I0127 12:39:31.064053 4745 generic.go:334] "Generic (PLEG): container finished" podID="a9618160-e490-4024-a0bb-7d84dc5e8fdd" containerID="ad76c554b61fd04101c022ab9d4bd7a0cfc17f7766b786505c36fcd02fad936c" exitCode=0 Jan 27 12:39:31 crc kubenswrapper[4745]: I0127 12:39:31.064086 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgf6q" event={"ID":"a9618160-e490-4024-a0bb-7d84dc5e8fdd","Type":"ContainerDied","Data":"ad76c554b61fd04101c022ab9d4bd7a0cfc17f7766b786505c36fcd02fad936c"} Jan 27 12:39:31 crc kubenswrapper[4745]: I0127 12:39:31.064125 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgf6q" event={"ID":"a9618160-e490-4024-a0bb-7d84dc5e8fdd","Type":"ContainerDied","Data":"8ee690cd05d5bc2b3b9cc496a8e5c585f03ae4933b3a3522c4c5e080c78e55cb"} Jan 27 12:39:31 crc kubenswrapper[4745]: I0127 12:39:31.064142 4745 scope.go:117] "RemoveContainer" containerID="ad76c554b61fd04101c022ab9d4bd7a0cfc17f7766b786505c36fcd02fad936c" Jan 27 12:39:31 crc kubenswrapper[4745]: I0127 12:39:31.064149 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bgf6q" Jan 27 12:39:31 crc kubenswrapper[4745]: I0127 12:39:31.097882 4745 scope.go:117] "RemoveContainer" containerID="a6a7e7c5cdbe447ac1c33cf2434d8f5b8e3b7fb483cdf0efce0d5330012059af" Jan 27 12:39:31 crc kubenswrapper[4745]: I0127 12:39:31.123376 4745 scope.go:117] "RemoveContainer" containerID="68d02f03d7662f4c4d7108e30902ca46f81185e069d562b784643dd4cd9d8cdd" Jan 27 12:39:31 crc kubenswrapper[4745]: I0127 12:39:31.150465 4745 scope.go:117] "RemoveContainer" containerID="ad76c554b61fd04101c022ab9d4bd7a0cfc17f7766b786505c36fcd02fad936c" Jan 27 12:39:31 crc kubenswrapper[4745]: E0127 12:39:31.151028 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad76c554b61fd04101c022ab9d4bd7a0cfc17f7766b786505c36fcd02fad936c\": container with ID starting with ad76c554b61fd04101c022ab9d4bd7a0cfc17f7766b786505c36fcd02fad936c not found: ID does not exist" containerID="ad76c554b61fd04101c022ab9d4bd7a0cfc17f7766b786505c36fcd02fad936c" Jan 27 12:39:31 crc kubenswrapper[4745]: I0127 12:39:31.151066 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad76c554b61fd04101c022ab9d4bd7a0cfc17f7766b786505c36fcd02fad936c"} err="failed to get container status \"ad76c554b61fd04101c022ab9d4bd7a0cfc17f7766b786505c36fcd02fad936c\": rpc error: code = NotFound desc = could not find container \"ad76c554b61fd04101c022ab9d4bd7a0cfc17f7766b786505c36fcd02fad936c\": container with ID starting with ad76c554b61fd04101c022ab9d4bd7a0cfc17f7766b786505c36fcd02fad936c not found: ID does not exist" Jan 27 12:39:31 crc kubenswrapper[4745]: I0127 12:39:31.151091 4745 scope.go:117] "RemoveContainer" containerID="a6a7e7c5cdbe447ac1c33cf2434d8f5b8e3b7fb483cdf0efce0d5330012059af" Jan 27 12:39:31 crc kubenswrapper[4745]: E0127 12:39:31.151379 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6a7e7c5cdbe447ac1c33cf2434d8f5b8e3b7fb483cdf0efce0d5330012059af\": container with ID starting with a6a7e7c5cdbe447ac1c33cf2434d8f5b8e3b7fb483cdf0efce0d5330012059af not found: ID does not exist" containerID="a6a7e7c5cdbe447ac1c33cf2434d8f5b8e3b7fb483cdf0efce0d5330012059af" Jan 27 12:39:31 crc kubenswrapper[4745]: I0127 12:39:31.151451 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6a7e7c5cdbe447ac1c33cf2434d8f5b8e3b7fb483cdf0efce0d5330012059af"} err="failed to get container status \"a6a7e7c5cdbe447ac1c33cf2434d8f5b8e3b7fb483cdf0efce0d5330012059af\": rpc error: code = NotFound desc = could not find container \"a6a7e7c5cdbe447ac1c33cf2434d8f5b8e3b7fb483cdf0efce0d5330012059af\": container with ID starting with a6a7e7c5cdbe447ac1c33cf2434d8f5b8e3b7fb483cdf0efce0d5330012059af not found: ID does not exist" Jan 27 12:39:31 crc kubenswrapper[4745]: I0127 12:39:31.151483 4745 scope.go:117] "RemoveContainer" containerID="68d02f03d7662f4c4d7108e30902ca46f81185e069d562b784643dd4cd9d8cdd" Jan 27 12:39:31 crc kubenswrapper[4745]: E0127 12:39:31.151725 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68d02f03d7662f4c4d7108e30902ca46f81185e069d562b784643dd4cd9d8cdd\": container with ID starting with 68d02f03d7662f4c4d7108e30902ca46f81185e069d562b784643dd4cd9d8cdd not found: ID does not exist" containerID="68d02f03d7662f4c4d7108e30902ca46f81185e069d562b784643dd4cd9d8cdd" 
Jan 27 12:39:31 crc kubenswrapper[4745]: I0127 12:39:31.151754 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68d02f03d7662f4c4d7108e30902ca46f81185e069d562b784643dd4cd9d8cdd"} err="failed to get container status \"68d02f03d7662f4c4d7108e30902ca46f81185e069d562b784643dd4cd9d8cdd\": rpc error: code = NotFound desc = could not find container \"68d02f03d7662f4c4d7108e30902ca46f81185e069d562b784643dd4cd9d8cdd\": container with ID starting with 68d02f03d7662f4c4d7108e30902ca46f81185e069d562b784643dd4cd9d8cdd not found: ID does not exist" Jan 27 12:39:31 crc kubenswrapper[4745]: I0127 12:39:31.616362 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9618160-e490-4024-a0bb-7d84dc5e8fdd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9618160-e490-4024-a0bb-7d84dc5e8fdd" (UID: "a9618160-e490-4024-a0bb-7d84dc5e8fdd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:39:31 crc kubenswrapper[4745]: I0127 12:39:31.664412 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9618160-e490-4024-a0bb-7d84dc5e8fdd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 12:39:31 crc kubenswrapper[4745]: I0127 12:39:31.696919 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bgf6q"] Jan 27 12:39:31 crc kubenswrapper[4745]: I0127 12:39:31.703452 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bgf6q"] Jan 27 12:39:32 crc kubenswrapper[4745]: I0127 12:39:32.082524 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9618160-e490-4024-a0bb-7d84dc5e8fdd" path="/var/lib/kubelet/pods/a9618160-e490-4024-a0bb-7d84dc5e8fdd/volumes" Jan 27 12:39:34 crc kubenswrapper[4745]: I0127 12:39:34.417297 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j8wmg"] Jan 27 12:39:34 crc kubenswrapper[4745]: E0127 12:39:34.417876 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9618160-e490-4024-a0bb-7d84dc5e8fdd" containerName="extract-content" Jan 27 12:39:34 crc kubenswrapper[4745]: I0127 12:39:34.417889 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9618160-e490-4024-a0bb-7d84dc5e8fdd" containerName="extract-content" Jan 27 12:39:34 crc kubenswrapper[4745]: E0127 12:39:34.417912 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9618160-e490-4024-a0bb-7d84dc5e8fdd" containerName="registry-server" Jan 27 12:39:34 crc kubenswrapper[4745]: I0127 12:39:34.417919 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9618160-e490-4024-a0bb-7d84dc5e8fdd" containerName="registry-server" Jan 27 12:39:34 crc kubenswrapper[4745]: E0127 12:39:34.417929 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9618160-e490-4024-a0bb-7d84dc5e8fdd" containerName="extract-utilities" Jan 27 12:39:34 crc kubenswrapper[4745]: I0127 12:39:34.417936 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9618160-e490-4024-a0bb-7d84dc5e8fdd" containerName="extract-utilities" Jan 27 12:39:34 crc kubenswrapper[4745]: I0127 12:39:34.418082 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9618160-e490-4024-a0bb-7d84dc5e8fdd" containerName="registry-server" Jan 27 12:39:34 crc kubenswrapper[4745]: I0127 12:39:34.419112 4745 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8wmg" Jan 27 12:39:34 crc kubenswrapper[4745]: I0127 12:39:34.432792 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8wmg"] Jan 27 12:39:34 crc kubenswrapper[4745]: I0127 12:39:34.499551 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j6rw\" (UniqueName: \"kubernetes.io/projected/7b927972-6107-4415-9bd7-fcbf0de790e2-kube-api-access-9j6rw\") pod \"redhat-marketplace-j8wmg\" (UID: \"7b927972-6107-4415-9bd7-fcbf0de790e2\") " pod="openshift-marketplace/redhat-marketplace-j8wmg" Jan 27 12:39:34 crc kubenswrapper[4745]: I0127 12:39:34.499751 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b927972-6107-4415-9bd7-fcbf0de790e2-utilities\") pod \"redhat-marketplace-j8wmg\" (UID: \"7b927972-6107-4415-9bd7-fcbf0de790e2\") " pod="openshift-marketplace/redhat-marketplace-j8wmg" Jan 27 12:39:34 crc kubenswrapper[4745]: I0127 12:39:34.499874 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b927972-6107-4415-9bd7-fcbf0de790e2-catalog-content\") pod \"redhat-marketplace-j8wmg\" (UID: \"7b927972-6107-4415-9bd7-fcbf0de790e2\") " pod="openshift-marketplace/redhat-marketplace-j8wmg" Jan 27 12:39:34 crc kubenswrapper[4745]: I0127 12:39:34.601510 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b927972-6107-4415-9bd7-fcbf0de790e2-utilities\") pod \"redhat-marketplace-j8wmg\" (UID: \"7b927972-6107-4415-9bd7-fcbf0de790e2\") " pod="openshift-marketplace/redhat-marketplace-j8wmg" Jan 27 12:39:34 crc kubenswrapper[4745]: I0127 12:39:34.602051 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b927972-6107-4415-9bd7-fcbf0de790e2-catalog-content\") pod \"redhat-marketplace-j8wmg\" (UID: \"7b927972-6107-4415-9bd7-fcbf0de790e2\") " pod="openshift-marketplace/redhat-marketplace-j8wmg" Jan 27 12:39:34 crc kubenswrapper[4745]: I0127 12:39:34.601993 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b927972-6107-4415-9bd7-fcbf0de790e2-utilities\") pod \"redhat-marketplace-j8wmg\" (UID: \"7b927972-6107-4415-9bd7-fcbf0de790e2\") " pod="openshift-marketplace/redhat-marketplace-j8wmg" Jan 27 12:39:34 crc kubenswrapper[4745]: I0127 12:39:34.602132 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j6rw\" (UniqueName: \"kubernetes.io/projected/7b927972-6107-4415-9bd7-fcbf0de790e2-kube-api-access-9j6rw\") pod \"redhat-marketplace-j8wmg\" (UID: \"7b927972-6107-4415-9bd7-fcbf0de790e2\") " pod="openshift-marketplace/redhat-marketplace-j8wmg" Jan 27 12:39:34 crc kubenswrapper[4745]: I0127 12:39:34.602504 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b927972-6107-4415-9bd7-fcbf0de790e2-catalog-content\") pod \"redhat-marketplace-j8wmg\" (UID: \"7b927972-6107-4415-9bd7-fcbf0de790e2\") " pod="openshift-marketplace/redhat-marketplace-j8wmg" Jan 27 12:39:34 crc kubenswrapper[4745]: I0127 12:39:34.622175 4745 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j6rw\" (UniqueName: \"kubernetes.io/projected/7b927972-6107-4415-9bd7-fcbf0de790e2-kube-api-access-9j6rw\") pod \"redhat-marketplace-j8wmg\" (UID: \"7b927972-6107-4415-9bd7-fcbf0de790e2\") " pod="openshift-marketplace/redhat-marketplace-j8wmg" Jan 27 12:39:34 crc kubenswrapper[4745]: I0127 12:39:34.748839 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8wmg" Jan 27 12:39:35 crc kubenswrapper[4745]: I0127 12:39:35.143984 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7sxkn" Jan 27 12:39:35 crc kubenswrapper[4745]: I0127 12:39:35.144037 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7sxkn" Jan 27 12:39:35 crc kubenswrapper[4745]: I0127 12:39:35.198779 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7sxkn" Jan 27 12:39:35 crc kubenswrapper[4745]: I0127 12:39:35.233030 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8wmg"] Jan 27 12:39:35 crc kubenswrapper[4745]: I0127 12:39:35.966797 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 12:39:35 crc kubenswrapper[4745]: I0127 12:39:35.966874 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 12:39:36 crc kubenswrapper[4745]: I0127 12:39:36.191564 4745 generic.go:334] "Generic (PLEG): container finished" podID="7b927972-6107-4415-9bd7-fcbf0de790e2" containerID="9c33943f56e01be14ba05b40817dd35447b1755b6aba8a98b2c21f89d3ef4c19" exitCode=0 Jan 27 12:39:36 crc kubenswrapper[4745]: I0127 12:39:36.191640 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8wmg" event={"ID":"7b927972-6107-4415-9bd7-fcbf0de790e2","Type":"ContainerDied","Data":"9c33943f56e01be14ba05b40817dd35447b1755b6aba8a98b2c21f89d3ef4c19"} Jan 27 12:39:36 crc kubenswrapper[4745]: I0127 12:39:36.191698 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8wmg" event={"ID":"7b927972-6107-4415-9bd7-fcbf0de790e2","Type":"ContainerStarted","Data":"0536b0187a3ebfe73fb6a8529455a5ebc70a5490892d54deceac72703a094102"} Jan 27 12:39:36 crc kubenswrapper[4745]: I0127 12:39:36.242539 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7sxkn" Jan 27 12:39:37 crc kubenswrapper[4745]: I0127 12:39:37.612235 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7sxkn"] Jan 27 12:39:38 crc kubenswrapper[4745]: I0127 12:39:38.208662 4745 generic.go:334] "Generic (PLEG): container finished" podID="7b927972-6107-4415-9bd7-fcbf0de790e2" containerID="d538ff61dfcfb018712cc97dcfa1c0ae62d91570cde2fbcc18b13c6de0739e06" exitCode=0 Jan 27 12:39:38 crc 
kubenswrapper[4745]: I0127 12:39:38.208740 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8wmg" event={"ID":"7b927972-6107-4415-9bd7-fcbf0de790e2","Type":"ContainerDied","Data":"d538ff61dfcfb018712cc97dcfa1c0ae62d91570cde2fbcc18b13c6de0739e06"} Jan 27 12:39:38 crc kubenswrapper[4745]: I0127 12:39:38.211139 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7sxkn" podUID="ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5" containerName="registry-server" containerID="cri-o://c95ef8810d7e0ad0b209e96d1d33024ce925e3546741a5baa4be5b1e399e0058" gracePeriod=2 Jan 27 12:39:39 crc kubenswrapper[4745]: I0127 12:39:39.227675 4745 generic.go:334] "Generic (PLEG): container finished" podID="ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5" containerID="c95ef8810d7e0ad0b209e96d1d33024ce925e3546741a5baa4be5b1e399e0058" exitCode=0 Jan 27 12:39:39 crc kubenswrapper[4745]: I0127 12:39:39.227752 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sxkn" event={"ID":"ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5","Type":"ContainerDied","Data":"c95ef8810d7e0ad0b209e96d1d33024ce925e3546741a5baa4be5b1e399e0058"} Jan 27 12:39:39 crc kubenswrapper[4745]: I0127 12:39:39.792986 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7sxkn" Jan 27 12:39:39 crc kubenswrapper[4745]: I0127 12:39:39.871636 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5-catalog-content\") pod \"ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5\" (UID: \"ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5\") " Jan 27 12:39:39 crc kubenswrapper[4745]: I0127 12:39:39.871930 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5-utilities\") pod \"ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5\" (UID: \"ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5\") " Jan 27 12:39:39 crc kubenswrapper[4745]: I0127 12:39:39.872111 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k5jm\" (UniqueName: \"kubernetes.io/projected/ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5-kube-api-access-4k5jm\") pod \"ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5\" (UID: \"ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5\") " Jan 27 12:39:39 crc kubenswrapper[4745]: I0127 12:39:39.872613 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5-utilities" (OuterVolumeSpecName: "utilities") pod "ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5" (UID: "ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:39:39 crc kubenswrapper[4745]: I0127 12:39:39.877461 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5-kube-api-access-4k5jm" (OuterVolumeSpecName: "kube-api-access-4k5jm") pod "ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5" (UID: "ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5"). InnerVolumeSpecName "kube-api-access-4k5jm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:39:39 crc kubenswrapper[4745]: I0127 12:39:39.913705 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5" (UID: "ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:39:39 crc kubenswrapper[4745]: I0127 12:39:39.974076 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k5jm\" (UniqueName: \"kubernetes.io/projected/ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5-kube-api-access-4k5jm\") on node \"crc\" DevicePath \"\"" Jan 27 12:39:39 crc kubenswrapper[4745]: I0127 12:39:39.974109 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 12:39:39 crc kubenswrapper[4745]: I0127 12:39:39.974118 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 12:39:40 crc kubenswrapper[4745]: I0127 12:39:40.238562 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sxkn" event={"ID":"ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5","Type":"ContainerDied","Data":"ef32562215babba9507c38f2c6da5aab0b5071b5827a10decf75d438077d2964"} Jan 27 12:39:40 crc kubenswrapper[4745]: I0127 12:39:40.238611 4745 scope.go:117] "RemoveContainer" containerID="c95ef8810d7e0ad0b209e96d1d33024ce925e3546741a5baa4be5b1e399e0058" Jan 27 12:39:40 crc kubenswrapper[4745]: I0127 12:39:40.238643 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7sxkn" Jan 27 12:39:40 crc kubenswrapper[4745]: I0127 12:39:40.241156 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8wmg" event={"ID":"7b927972-6107-4415-9bd7-fcbf0de790e2","Type":"ContainerStarted","Data":"3b49e616aac8029bdd46040fc5be8d53a7d8947625fbeaf41ac2878cb8f7a340"} Jan 27 12:39:40 crc kubenswrapper[4745]: I0127 12:39:40.259959 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7sxkn"] Jan 27 12:39:40 crc kubenswrapper[4745]: I0127 12:39:40.269567 4745 scope.go:117] "RemoveContainer" containerID="9a6a4c1b53ffcede1306d0bf41b2b588f4c088a33c8df49bf5d49c2ae679c620" Jan 27 12:39:40 crc kubenswrapper[4745]: I0127 12:39:40.271214 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7sxkn"] Jan 27 12:39:40 crc kubenswrapper[4745]: I0127 12:39:40.289082 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j8wmg" podStartSLOduration=3.283917222 podStartE2EDuration="6.289059187s" podCreationTimestamp="2026-01-27 12:39:34 +0000 UTC" firstStartedPulling="2026-01-27 12:39:36.1977353 +0000 UTC m=+1669.002645988" lastFinishedPulling="2026-01-27 12:39:39.202877265 +0000 UTC m=+1672.007787953" observedRunningTime="2026-01-27 12:39:40.281843814 +0000 UTC m=+1673.086754502" watchObservedRunningTime="2026-01-27 12:39:40.289059187 +0000 UTC m=+1673.093969875" Jan 27 12:39:40 crc kubenswrapper[4745]: I0127 12:39:40.301982 4745 scope.go:117] "RemoveContainer" containerID="9d96db630327e2561ac6043e32df537a271d02697417684cf0c465bf936132ad" Jan 27 12:39:42 crc kubenswrapper[4745]: I0127 12:39:42.083038 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5" path="/var/lib/kubelet/pods/ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5/volumes" Jan 27 12:39:44 crc kubenswrapper[4745]: I0127 12:39:44.750393 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j8wmg" Jan 27 12:39:44 crc kubenswrapper[4745]: I0127 12:39:44.750720 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j8wmg" Jan 27 12:39:44 crc kubenswrapper[4745]: I0127 12:39:44.794256 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j8wmg" Jan 27 12:39:45 crc kubenswrapper[4745]: I0127 12:39:45.330591 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j8wmg" Jan 27 12:39:46 crc kubenswrapper[4745]: I0127 12:39:46.007984 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8wmg"] Jan 27 12:39:47 crc kubenswrapper[4745]: I0127 12:39:47.298307 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j8wmg" podUID="7b927972-6107-4415-9bd7-fcbf0de790e2" containerName="registry-server" containerID="cri-o://3b49e616aac8029bdd46040fc5be8d53a7d8947625fbeaf41ac2878cb8f7a340" gracePeriod=2 Jan 27 12:39:47 crc kubenswrapper[4745]: I0127 12:39:47.701799 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8wmg" Jan 27 12:39:47 crc kubenswrapper[4745]: I0127 12:39:47.789150 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b927972-6107-4415-9bd7-fcbf0de790e2-utilities\") pod \"7b927972-6107-4415-9bd7-fcbf0de790e2\" (UID: \"7b927972-6107-4415-9bd7-fcbf0de790e2\") " Jan 27 12:39:47 crc kubenswrapper[4745]: I0127 12:39:47.789632 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b927972-6107-4415-9bd7-fcbf0de790e2-catalog-content\") pod \"7b927972-6107-4415-9bd7-fcbf0de790e2\" (UID: \"7b927972-6107-4415-9bd7-fcbf0de790e2\") " Jan 27 12:39:47 crc kubenswrapper[4745]: I0127 12:39:47.792041 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9j6rw\" (UniqueName: \"kubernetes.io/projected/7b927972-6107-4415-9bd7-fcbf0de790e2-kube-api-access-9j6rw\") pod \"7b927972-6107-4415-9bd7-fcbf0de790e2\" (UID: \"7b927972-6107-4415-9bd7-fcbf0de790e2\") " Jan 27 12:39:47 crc kubenswrapper[4745]: I0127 12:39:47.790443 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b927972-6107-4415-9bd7-fcbf0de790e2-utilities" (OuterVolumeSpecName: "utilities") pod "7b927972-6107-4415-9bd7-fcbf0de790e2" (UID: "7b927972-6107-4415-9bd7-fcbf0de790e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:39:47 crc kubenswrapper[4745]: I0127 12:39:47.792351 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b927972-6107-4415-9bd7-fcbf0de790e2-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 12:39:47 crc kubenswrapper[4745]: I0127 12:39:47.797173 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b927972-6107-4415-9bd7-fcbf0de790e2-kube-api-access-9j6rw" (OuterVolumeSpecName: "kube-api-access-9j6rw") pod "7b927972-6107-4415-9bd7-fcbf0de790e2" (UID: "7b927972-6107-4415-9bd7-fcbf0de790e2"). InnerVolumeSpecName "kube-api-access-9j6rw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:39:47 crc kubenswrapper[4745]: I0127 12:39:47.815224 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b927972-6107-4415-9bd7-fcbf0de790e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b927972-6107-4415-9bd7-fcbf0de790e2" (UID: "7b927972-6107-4415-9bd7-fcbf0de790e2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:39:47 crc kubenswrapper[4745]: I0127 12:39:47.894158 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b927972-6107-4415-9bd7-fcbf0de790e2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 12:39:47 crc kubenswrapper[4745]: I0127 12:39:47.894191 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9j6rw\" (UniqueName: \"kubernetes.io/projected/7b927972-6107-4415-9bd7-fcbf0de790e2-kube-api-access-9j6rw\") on node \"crc\" DevicePath \"\"" Jan 27 12:39:48 crc kubenswrapper[4745]: I0127 12:39:48.305763 4745 generic.go:334] "Generic (PLEG): container finished" podID="7b927972-6107-4415-9bd7-fcbf0de790e2" containerID="3b49e616aac8029bdd46040fc5be8d53a7d8947625fbeaf41ac2878cb8f7a340" exitCode=0 Jan 27 12:39:48 crc kubenswrapper[4745]: I0127 12:39:48.306111 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8wmg" event={"ID":"7b927972-6107-4415-9bd7-fcbf0de790e2","Type":"ContainerDied","Data":"3b49e616aac8029bdd46040fc5be8d53a7d8947625fbeaf41ac2878cb8f7a340"} Jan 27 12:39:48 crc kubenswrapper[4745]: I0127 12:39:48.306534 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8wmg" event={"ID":"7b927972-6107-4415-9bd7-fcbf0de790e2","Type":"ContainerDied","Data":"0536b0187a3ebfe73fb6a8529455a5ebc70a5490892d54deceac72703a094102"} Jan 27 12:39:48 crc kubenswrapper[4745]: I0127 12:39:48.306568 4745 scope.go:117] "RemoveContainer" containerID="3b49e616aac8029bdd46040fc5be8d53a7d8947625fbeaf41ac2878cb8f7a340" Jan 27 12:39:48 crc kubenswrapper[4745]: I0127 12:39:48.306721 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8wmg" Jan 27 12:39:48 crc kubenswrapper[4745]: I0127 12:39:48.329338 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8wmg"] Jan 27 12:39:48 crc kubenswrapper[4745]: I0127 12:39:48.330094 4745 scope.go:117] "RemoveContainer" containerID="d538ff61dfcfb018712cc97dcfa1c0ae62d91570cde2fbcc18b13c6de0739e06" Jan 27 12:39:48 crc kubenswrapper[4745]: I0127 12:39:48.336349 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8wmg"] Jan 27 12:39:48 crc kubenswrapper[4745]: I0127 12:39:48.346721 4745 scope.go:117] "RemoveContainer" containerID="9c33943f56e01be14ba05b40817dd35447b1755b6aba8a98b2c21f89d3ef4c19" Jan 27 12:39:48 crc kubenswrapper[4745]: I0127 12:39:48.375141 4745 scope.go:117] "RemoveContainer" containerID="3b49e616aac8029bdd46040fc5be8d53a7d8947625fbeaf41ac2878cb8f7a340" Jan 27 12:39:48 crc kubenswrapper[4745]: E0127 12:39:48.375727 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b49e616aac8029bdd46040fc5be8d53a7d8947625fbeaf41ac2878cb8f7a340\": container with ID starting with 3b49e616aac8029bdd46040fc5be8d53a7d8947625fbeaf41ac2878cb8f7a340 not found: ID does not exist" containerID="3b49e616aac8029bdd46040fc5be8d53a7d8947625fbeaf41ac2878cb8f7a340" Jan 27 12:39:48 crc kubenswrapper[4745]: I0127 12:39:48.375774 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b49e616aac8029bdd46040fc5be8d53a7d8947625fbeaf41ac2878cb8f7a340"} err="failed to get container status \"3b49e616aac8029bdd46040fc5be8d53a7d8947625fbeaf41ac2878cb8f7a340\": rpc error: code = NotFound desc = could not find container \"3b49e616aac8029bdd46040fc5be8d53a7d8947625fbeaf41ac2878cb8f7a340\": container with ID starting with 3b49e616aac8029bdd46040fc5be8d53a7d8947625fbeaf41ac2878cb8f7a340 not found: ID does not exist" Jan 27 12:39:48 crc kubenswrapper[4745]: I0127 12:39:48.376043 4745 scope.go:117] "RemoveContainer" containerID="d538ff61dfcfb018712cc97dcfa1c0ae62d91570cde2fbcc18b13c6de0739e06" Jan 27 12:39:48 crc kubenswrapper[4745]: E0127 12:39:48.376578 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d538ff61dfcfb018712cc97dcfa1c0ae62d91570cde2fbcc18b13c6de0739e06\": container with ID starting with d538ff61dfcfb018712cc97dcfa1c0ae62d91570cde2fbcc18b13c6de0739e06 not found: ID does not exist" containerID="d538ff61dfcfb018712cc97dcfa1c0ae62d91570cde2fbcc18b13c6de0739e06" Jan 27 12:39:48 crc kubenswrapper[4745]: I0127 12:39:48.376614 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d538ff61dfcfb018712cc97dcfa1c0ae62d91570cde2fbcc18b13c6de0739e06"} err="failed to get container status \"d538ff61dfcfb018712cc97dcfa1c0ae62d91570cde2fbcc18b13c6de0739e06\": rpc error: code = NotFound desc = could not find container \"d538ff61dfcfb018712cc97dcfa1c0ae62d91570cde2fbcc18b13c6de0739e06\": container with ID starting with d538ff61dfcfb018712cc97dcfa1c0ae62d91570cde2fbcc18b13c6de0739e06 not found: ID does not exist" Jan 27 12:39:48 crc kubenswrapper[4745]: I0127 12:39:48.376649 4745 scope.go:117] "RemoveContainer" containerID="9c33943f56e01be14ba05b40817dd35447b1755b6aba8a98b2c21f89d3ef4c19" Jan 27 12:39:48 crc kubenswrapper[4745]: E0127 12:39:48.376942 4745 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9c33943f56e01be14ba05b40817dd35447b1755b6aba8a98b2c21f89d3ef4c19\": container with ID starting with 9c33943f56e01be14ba05b40817dd35447b1755b6aba8a98b2c21f89d3ef4c19 not found: ID does not exist" containerID="9c33943f56e01be14ba05b40817dd35447b1755b6aba8a98b2c21f89d3ef4c19" Jan 27 12:39:48 crc kubenswrapper[4745]: I0127 12:39:48.376972 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c33943f56e01be14ba05b40817dd35447b1755b6aba8a98b2c21f89d3ef4c19"} err="failed to get container status \"9c33943f56e01be14ba05b40817dd35447b1755b6aba8a98b2c21f89d3ef4c19\": rpc error: code = NotFound desc = could not find container \"9c33943f56e01be14ba05b40817dd35447b1755b6aba8a98b2c21f89d3ef4c19\": container with ID starting with 9c33943f56e01be14ba05b40817dd35447b1755b6aba8a98b2c21f89d3ef4c19 not found: ID does not exist" Jan 27 12:39:50 crc kubenswrapper[4745]: I0127 12:39:50.082266 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b927972-6107-4415-9bd7-fcbf0de790e2" path="/var/lib/kubelet/pods/7b927972-6107-4415-9bd7-fcbf0de790e2/volumes" Jan 27 12:40:05 crc kubenswrapper[4745]: I0127 12:40:05.966918 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 12:40:05 crc kubenswrapper[4745]: I0127 12:40:05.967501 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 12:40:35 crc kubenswrapper[4745]: I0127 12:40:35.966925 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 12:40:35 crc kubenswrapper[4745]: I0127 12:40:35.967511 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 12:40:35 crc kubenswrapper[4745]: I0127 12:40:35.967573 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" Jan 27 12:40:35 crc kubenswrapper[4745]: I0127 12:40:35.968332 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5"} pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 12:40:35 crc kubenswrapper[4745]: I0127 12:40:35.968406 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" 
podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" containerID="cri-o://9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5" gracePeriod=600 Jan 27 12:40:36 crc kubenswrapper[4745]: E0127 12:40:36.123925 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:40:36 crc kubenswrapper[4745]: I0127 12:40:36.656421 4745 generic.go:334] "Generic (PLEG): container finished" podID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerID="9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5" exitCode=0 Jan 27 12:40:36 crc kubenswrapper[4745]: I0127 12:40:36.656493 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerDied","Data":"9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5"} Jan 27 12:40:36 crc kubenswrapper[4745]: I0127 12:40:36.656752 4745 scope.go:117] "RemoveContainer" containerID="a6a5433ca38393ff94716bf68e0e4f44c98509e24edf8bea4957ad6fd4d223a6" Jan 27 12:40:36 crc kubenswrapper[4745]: I0127 12:40:36.657237 4745 scope.go:117] "RemoveContainer" containerID="9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5" Jan 27 12:40:36 crc kubenswrapper[4745]: E0127 12:40:36.657473 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:40:48 crc kubenswrapper[4745]: I0127 12:40:48.083938 4745 scope.go:117] "RemoveContainer" containerID="9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5" Jan 27 12:40:48 crc kubenswrapper[4745]: E0127 12:40:48.085133 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:41:00 crc kubenswrapper[4745]: I0127 12:41:00.074687 4745 scope.go:117] "RemoveContainer" containerID="9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5" Jan 27 12:41:00 crc kubenswrapper[4745]: E0127 12:41:00.075311 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:41:11 crc kubenswrapper[4745]: I0127 12:41:11.074114 4745 scope.go:117] 
"RemoveContainer" containerID="9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5" Jan 27 12:41:11 crc kubenswrapper[4745]: E0127 12:41:11.075118 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:41:24 crc kubenswrapper[4745]: I0127 12:41:24.074563 4745 scope.go:117] "RemoveContainer" containerID="9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5" Jan 27 12:41:24 crc kubenswrapper[4745]: E0127 12:41:24.075731 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:41:36 crc kubenswrapper[4745]: I0127 12:41:36.074154 4745 scope.go:117] "RemoveContainer" containerID="9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5" Jan 27 12:41:36 crc kubenswrapper[4745]: E0127 12:41:36.074872 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:41:50 crc kubenswrapper[4745]: I0127 12:41:50.073669 4745 scope.go:117] "RemoveContainer" containerID="9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5" Jan 27 12:41:50 crc kubenswrapper[4745]: E0127 12:41:50.074351 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:42:05 crc kubenswrapper[4745]: I0127 12:42:05.074197 4745 scope.go:117] "RemoveContainer" containerID="9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5" Jan 27 12:42:05 crc kubenswrapper[4745]: E0127 12:42:05.074893 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:42:17 crc kubenswrapper[4745]: I0127 12:42:17.073340 4745 scope.go:117] "RemoveContainer" containerID="9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5" Jan 27 12:42:17 crc kubenswrapper[4745]: E0127 12:42:17.073923 4745 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:42:30 crc kubenswrapper[4745]: I0127 12:42:30.074394 4745 scope.go:117] "RemoveContainer" containerID="9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5" Jan 27 12:42:30 crc kubenswrapper[4745]: E0127 12:42:30.075346 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:42:43 crc kubenswrapper[4745]: I0127 12:42:43.073947 4745 scope.go:117] "RemoveContainer" containerID="9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5" Jan 27 12:42:43 crc kubenswrapper[4745]: E0127 12:42:43.074596 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:42:58 crc kubenswrapper[4745]: I0127 12:42:58.078842 4745 scope.go:117] "RemoveContainer" containerID="9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5" Jan 27 12:42:58 crc kubenswrapper[4745]: E0127 12:42:58.079685 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:43:12 crc kubenswrapper[4745]: I0127 12:43:12.073761 4745 scope.go:117] "RemoveContainer" containerID="9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5" Jan 27 12:43:12 crc kubenswrapper[4745]: E0127 12:43:12.074540 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:43:26 crc kubenswrapper[4745]: I0127 12:43:26.074308 4745 scope.go:117] "RemoveContainer" containerID="9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5" Jan 27 12:43:26 crc kubenswrapper[4745]: E0127 12:43:26.075260 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:43:38 crc kubenswrapper[4745]: I0127 12:43:38.077000 4745 scope.go:117] "RemoveContainer" containerID="9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5" Jan 27 12:43:38 crc kubenswrapper[4745]: E0127 12:43:38.077715 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:43:50 crc kubenswrapper[4745]: I0127 12:43:50.074271 4745 scope.go:117] "RemoveContainer" containerID="9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5" Jan 27 12:43:50 crc kubenswrapper[4745]: E0127 12:43:50.076994 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:44:05 crc kubenswrapper[4745]: I0127 12:44:05.073724 4745 scope.go:117] "RemoveContainer" containerID="9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5" Jan 27 12:44:05 crc kubenswrapper[4745]: E0127 12:44:05.074445 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:44:16 crc kubenswrapper[4745]: I0127 12:44:16.074429 4745 scope.go:117] "RemoveContainer" containerID="9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5" Jan 27 12:44:16 crc kubenswrapper[4745]: E0127 12:44:16.075318 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:44:27 crc kubenswrapper[4745]: I0127 12:44:27.073298 4745 scope.go:117] "RemoveContainer" containerID="9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5" Jan 27 12:44:27 crc kubenswrapper[4745]: E0127 12:44:27.075275 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" 
podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:44:42 crc kubenswrapper[4745]: I0127 12:44:42.073640 4745 scope.go:117] "RemoveContainer" containerID="9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5" Jan 27 12:44:42 crc kubenswrapper[4745]: E0127 12:44:42.074212 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:44:54 crc kubenswrapper[4745]: I0127 12:44:54.074262 4745 scope.go:117] "RemoveContainer" containerID="9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5" Jan 27 12:44:54 crc kubenswrapper[4745]: E0127 12:44:54.075405 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:45:00 crc kubenswrapper[4745]: I0127 12:45:00.150005 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491965-kjxlw"] Jan 27 12:45:00 crc kubenswrapper[4745]: E0127 12:45:00.151630 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5" containerName="extract-content" Jan 27 12:45:00 crc kubenswrapper[4745]: I0127 12:45:00.151649 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5" containerName="extract-content" Jan 27 12:45:00 crc kubenswrapper[4745]: E0127 12:45:00.151673 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b927972-6107-4415-9bd7-fcbf0de790e2" containerName="registry-server" Jan 27 12:45:00 crc kubenswrapper[4745]: I0127 12:45:00.151681 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b927972-6107-4415-9bd7-fcbf0de790e2" containerName="registry-server" Jan 27 12:45:00 crc kubenswrapper[4745]: E0127 12:45:00.151691 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b927972-6107-4415-9bd7-fcbf0de790e2" containerName="extract-utilities" Jan 27 12:45:00 crc kubenswrapper[4745]: I0127 12:45:00.151699 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b927972-6107-4415-9bd7-fcbf0de790e2" containerName="extract-utilities" Jan 27 12:45:00 crc kubenswrapper[4745]: E0127 12:45:00.151723 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5" containerName="extract-utilities" Jan 27 12:45:00 crc kubenswrapper[4745]: I0127 12:45:00.151732 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5" containerName="extract-utilities" Jan 27 12:45:00 crc kubenswrapper[4745]: E0127 12:45:00.151743 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b927972-6107-4415-9bd7-fcbf0de790e2" containerName="extract-content" Jan 27 12:45:00 crc kubenswrapper[4745]: I0127 12:45:00.151751 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b927972-6107-4415-9bd7-fcbf0de790e2" 
containerName="extract-content" Jan 27 12:45:00 crc kubenswrapper[4745]: E0127 12:45:00.151763 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5" containerName="registry-server" Jan 27 12:45:00 crc kubenswrapper[4745]: I0127 12:45:00.151771 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5" containerName="registry-server" Jan 27 12:45:00 crc kubenswrapper[4745]: I0127 12:45:00.151967 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b927972-6107-4415-9bd7-fcbf0de790e2" containerName="registry-server" Jan 27 12:45:00 crc kubenswrapper[4745]: I0127 12:45:00.151994 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba50a1ac-62ea-4f80-ae91-64bc0c5e8be5" containerName="registry-server" Jan 27 12:45:00 crc kubenswrapper[4745]: I0127 12:45:00.152971 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491965-kjxlw" Jan 27 12:45:00 crc kubenswrapper[4745]: I0127 12:45:00.156270 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 12:45:00 crc kubenswrapper[4745]: I0127 12:45:00.156551 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 12:45:00 crc kubenswrapper[4745]: I0127 12:45:00.164627 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491965-kjxlw"] Jan 27 12:45:00 crc kubenswrapper[4745]: I0127 12:45:00.227645 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff4aa978-b605-46dc-9603-96638efd0c73-secret-volume\") pod \"collect-profiles-29491965-kjxlw\" (UID: \"ff4aa978-b605-46dc-9603-96638efd0c73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491965-kjxlw" Jan 27 12:45:00 crc kubenswrapper[4745]: I0127 12:45:00.227939 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff4aa978-b605-46dc-9603-96638efd0c73-config-volume\") pod \"collect-profiles-29491965-kjxlw\" (UID: \"ff4aa978-b605-46dc-9603-96638efd0c73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491965-kjxlw" Jan 27 12:45:00 crc kubenswrapper[4745]: I0127 12:45:00.227991 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n45w\" (UniqueName: \"kubernetes.io/projected/ff4aa978-b605-46dc-9603-96638efd0c73-kube-api-access-7n45w\") pod \"collect-profiles-29491965-kjxlw\" (UID: \"ff4aa978-b605-46dc-9603-96638efd0c73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491965-kjxlw" Jan 27 12:45:00 crc kubenswrapper[4745]: I0127 12:45:00.329168 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff4aa978-b605-46dc-9603-96638efd0c73-secret-volume\") pod \"collect-profiles-29491965-kjxlw\" (UID: \"ff4aa978-b605-46dc-9603-96638efd0c73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491965-kjxlw" Jan 27 12:45:00 crc kubenswrapper[4745]: I0127 12:45:00.329278 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/ff4aa978-b605-46dc-9603-96638efd0c73-config-volume\") pod \"collect-profiles-29491965-kjxlw\" (UID: \"ff4aa978-b605-46dc-9603-96638efd0c73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491965-kjxlw" Jan 27 12:45:00 crc kubenswrapper[4745]: I0127 12:45:00.329308 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n45w\" (UniqueName: \"kubernetes.io/projected/ff4aa978-b605-46dc-9603-96638efd0c73-kube-api-access-7n45w\") pod \"collect-profiles-29491965-kjxlw\" (UID: \"ff4aa978-b605-46dc-9603-96638efd0c73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491965-kjxlw" Jan 27 12:45:00 crc kubenswrapper[4745]: I0127 12:45:00.330417 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff4aa978-b605-46dc-9603-96638efd0c73-config-volume\") pod \"collect-profiles-29491965-kjxlw\" (UID: \"ff4aa978-b605-46dc-9603-96638efd0c73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491965-kjxlw" Jan 27 12:45:00 crc kubenswrapper[4745]: I0127 12:45:00.335435 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff4aa978-b605-46dc-9603-96638efd0c73-secret-volume\") pod \"collect-profiles-29491965-kjxlw\" (UID: \"ff4aa978-b605-46dc-9603-96638efd0c73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491965-kjxlw" Jan 27 12:45:00 crc kubenswrapper[4745]: I0127 12:45:00.351594 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n45w\" (UniqueName: \"kubernetes.io/projected/ff4aa978-b605-46dc-9603-96638efd0c73-kube-api-access-7n45w\") pod \"collect-profiles-29491965-kjxlw\" (UID: \"ff4aa978-b605-46dc-9603-96638efd0c73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491965-kjxlw" Jan 27 12:45:00 crc kubenswrapper[4745]: I0127 12:45:00.486038 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491965-kjxlw" Jan 27 12:45:00 crc kubenswrapper[4745]: I0127 12:45:00.918039 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491965-kjxlw"] Jan 27 12:45:01 crc kubenswrapper[4745]: I0127 12:45:01.867210 4745 generic.go:334] "Generic (PLEG): container finished" podID="ff4aa978-b605-46dc-9603-96638efd0c73" containerID="b5b3be6f4707d4160e8100e11c91bbb54266b6f7a996db256f03f748d6b112c4" exitCode=0 Jan 27 12:45:01 crc kubenswrapper[4745]: I0127 12:45:01.867276 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491965-kjxlw" event={"ID":"ff4aa978-b605-46dc-9603-96638efd0c73","Type":"ContainerDied","Data":"b5b3be6f4707d4160e8100e11c91bbb54266b6f7a996db256f03f748d6b112c4"} Jan 27 12:45:01 crc kubenswrapper[4745]: I0127 12:45:01.867616 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491965-kjxlw" event={"ID":"ff4aa978-b605-46dc-9603-96638efd0c73","Type":"ContainerStarted","Data":"5ec1fa63f800d8c867fb8f80ca1934554804d66e8f20c4d155f697865d49cdc9"} Jan 27 12:45:03 crc kubenswrapper[4745]: I0127 12:45:03.130884 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491965-kjxlw" Jan 27 12:45:03 crc kubenswrapper[4745]: I0127 12:45:03.171849 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff4aa978-b605-46dc-9603-96638efd0c73-config-volume\") pod \"ff4aa978-b605-46dc-9603-96638efd0c73\" (UID: \"ff4aa978-b605-46dc-9603-96638efd0c73\") " Jan 27 12:45:03 crc kubenswrapper[4745]: I0127 12:45:03.171904 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n45w\" (UniqueName: \"kubernetes.io/projected/ff4aa978-b605-46dc-9603-96638efd0c73-kube-api-access-7n45w\") pod \"ff4aa978-b605-46dc-9603-96638efd0c73\" (UID: \"ff4aa978-b605-46dc-9603-96638efd0c73\") " Jan 27 12:45:03 crc kubenswrapper[4745]: I0127 12:45:03.172014 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff4aa978-b605-46dc-9603-96638efd0c73-secret-volume\") pod \"ff4aa978-b605-46dc-9603-96638efd0c73\" (UID: \"ff4aa978-b605-46dc-9603-96638efd0c73\") " Jan 27 12:45:03 crc kubenswrapper[4745]: I0127 12:45:03.172696 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff4aa978-b605-46dc-9603-96638efd0c73-config-volume" (OuterVolumeSpecName: "config-volume") pod "ff4aa978-b605-46dc-9603-96638efd0c73" (UID: "ff4aa978-b605-46dc-9603-96638efd0c73"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 12:45:03 crc kubenswrapper[4745]: I0127 12:45:03.178013 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff4aa978-b605-46dc-9603-96638efd0c73-kube-api-access-7n45w" (OuterVolumeSpecName: "kube-api-access-7n45w") pod "ff4aa978-b605-46dc-9603-96638efd0c73" (UID: "ff4aa978-b605-46dc-9603-96638efd0c73"). InnerVolumeSpecName "kube-api-access-7n45w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:45:03 crc kubenswrapper[4745]: I0127 12:45:03.178026 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff4aa978-b605-46dc-9603-96638efd0c73-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ff4aa978-b605-46dc-9603-96638efd0c73" (UID: "ff4aa978-b605-46dc-9603-96638efd0c73"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 12:45:03 crc kubenswrapper[4745]: I0127 12:45:03.273923 4745 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff4aa978-b605-46dc-9603-96638efd0c73-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 12:45:03 crc kubenswrapper[4745]: I0127 12:45:03.273970 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n45w\" (UniqueName: \"kubernetes.io/projected/ff4aa978-b605-46dc-9603-96638efd0c73-kube-api-access-7n45w\") on node \"crc\" DevicePath \"\"" Jan 27 12:45:03 crc kubenswrapper[4745]: I0127 12:45:03.273985 4745 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff4aa978-b605-46dc-9603-96638efd0c73-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 12:45:03 crc kubenswrapper[4745]: I0127 12:45:03.883527 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491965-kjxlw" event={"ID":"ff4aa978-b605-46dc-9603-96638efd0c73","Type":"ContainerDied","Data":"5ec1fa63f800d8c867fb8f80ca1934554804d66e8f20c4d155f697865d49cdc9"} Jan 27 12:45:03 crc kubenswrapper[4745]: I0127 12:45:03.883575 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ec1fa63f800d8c867fb8f80ca1934554804d66e8f20c4d155f697865d49cdc9" Jan 27 12:45:03 crc kubenswrapper[4745]: I0127 12:45:03.883582 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491965-kjxlw" Jan 27 12:45:04 crc kubenswrapper[4745]: I0127 12:45:04.206872 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491920-s88fm"] Jan 27 12:45:04 crc kubenswrapper[4745]: I0127 12:45:04.212687 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491920-s88fm"] Jan 27 12:45:06 crc kubenswrapper[4745]: I0127 12:45:06.074149 4745 scope.go:117] "RemoveContainer" containerID="9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5" Jan 27 12:45:06 crc kubenswrapper[4745]: E0127 12:45:06.074638 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:45:06 crc kubenswrapper[4745]: I0127 12:45:06.085269 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6086ad74-5d02-4181-bb34-8c116409de42" path="/var/lib/kubelet/pods/6086ad74-5d02-4181-bb34-8c116409de42/volumes" Jan 27 12:45:17 crc kubenswrapper[4745]: I0127 12:45:17.073753 4745 scope.go:117] "RemoveContainer" containerID="9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5" Jan 27 12:45:17 crc kubenswrapper[4745]: E0127 12:45:17.075938 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:45:29 crc kubenswrapper[4745]: I0127 12:45:29.073490 4745 scope.go:117] "RemoveContainer" containerID="9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5" Jan 27 12:45:29 crc kubenswrapper[4745]: E0127 12:45:29.075553 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:45:40 crc kubenswrapper[4745]: I0127 12:45:40.074535 4745 scope.go:117] "RemoveContainer" containerID="9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5" Jan 27 12:45:41 crc kubenswrapper[4745]: I0127 12:45:41.163026 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerStarted","Data":"7326449f06a2e234c2eecde59157e50c93e734223f6ac7d6d6560b60db07b650"} Jan 27 12:45:56 crc kubenswrapper[4745]: I0127 12:45:56.386955 4745 scope.go:117] "RemoveContainer" containerID="60ca90d8c87883fb7d78bb2a23252d2963ddce1f3954fa70ad400f9d46849a47" Jan 27 12:47:39 crc kubenswrapper[4745]: I0127 12:47:39.826162 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qld7z"] Jan 27 12:47:39 crc kubenswrapper[4745]: E0127 12:47:39.827045 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff4aa978-b605-46dc-9603-96638efd0c73" containerName="collect-profiles" Jan 27 12:47:39 crc kubenswrapper[4745]: I0127 12:47:39.827064 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff4aa978-b605-46dc-9603-96638efd0c73" containerName="collect-profiles" Jan 27 12:47:39 crc kubenswrapper[4745]: I0127 12:47:39.827252 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff4aa978-b605-46dc-9603-96638efd0c73" containerName="collect-profiles" Jan 27 12:47:39 crc kubenswrapper[4745]: I0127 12:47:39.828634 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qld7z" Jan 27 12:47:39 crc kubenswrapper[4745]: I0127 12:47:39.841720 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qld7z"] Jan 27 12:47:39 crc kubenswrapper[4745]: I0127 12:47:39.850491 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlsvg\" (UniqueName: \"kubernetes.io/projected/39b74c1e-98f9-4ad0-9a5c-246abf4d60a4-kube-api-access-wlsvg\") pod \"redhat-operators-qld7z\" (UID: \"39b74c1e-98f9-4ad0-9a5c-246abf4d60a4\") " pod="openshift-marketplace/redhat-operators-qld7z" Jan 27 12:47:39 crc kubenswrapper[4745]: I0127 12:47:39.850616 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39b74c1e-98f9-4ad0-9a5c-246abf4d60a4-utilities\") pod \"redhat-operators-qld7z\" (UID: \"39b74c1e-98f9-4ad0-9a5c-246abf4d60a4\") " pod="openshift-marketplace/redhat-operators-qld7z" Jan 27 12:47:39 crc kubenswrapper[4745]: I0127 12:47:39.850728 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39b74c1e-98f9-4ad0-9a5c-246abf4d60a4-catalog-content\") pod \"redhat-operators-qld7z\" (UID: \"39b74c1e-98f9-4ad0-9a5c-246abf4d60a4\") " pod="openshift-marketplace/redhat-operators-qld7z" Jan 27 12:47:39 crc kubenswrapper[4745]: I0127 12:47:39.951755 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39b74c1e-98f9-4ad0-9a5c-246abf4d60a4-utilities\") pod \"redhat-operators-qld7z\" (UID: \"39b74c1e-98f9-4ad0-9a5c-246abf4d60a4\") " pod="openshift-marketplace/redhat-operators-qld7z" Jan 27 12:47:39 crc kubenswrapper[4745]: I0127 12:47:39.951840 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39b74c1e-98f9-4ad0-9a5c-246abf4d60a4-catalog-content\") pod \"redhat-operators-qld7z\" (UID: \"39b74c1e-98f9-4ad0-9a5c-246abf4d60a4\") " pod="openshift-marketplace/redhat-operators-qld7z" Jan 27 12:47:39 crc kubenswrapper[4745]: I0127 12:47:39.951900 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlsvg\" (UniqueName: \"kubernetes.io/projected/39b74c1e-98f9-4ad0-9a5c-246abf4d60a4-kube-api-access-wlsvg\") pod \"redhat-operators-qld7z\" (UID: \"39b74c1e-98f9-4ad0-9a5c-246abf4d60a4\") " pod="openshift-marketplace/redhat-operators-qld7z" Jan 27 12:47:39 crc kubenswrapper[4745]: I0127 12:47:39.952398 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39b74c1e-98f9-4ad0-9a5c-246abf4d60a4-utilities\") pod \"redhat-operators-qld7z\" (UID: \"39b74c1e-98f9-4ad0-9a5c-246abf4d60a4\") " pod="openshift-marketplace/redhat-operators-qld7z" Jan 27 12:47:39 crc kubenswrapper[4745]: I0127 12:47:39.952444 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39b74c1e-98f9-4ad0-9a5c-246abf4d60a4-catalog-content\") pod \"redhat-operators-qld7z\" (UID: \"39b74c1e-98f9-4ad0-9a5c-246abf4d60a4\") " pod="openshift-marketplace/redhat-operators-qld7z" Jan 27 12:47:39 crc kubenswrapper[4745]: I0127 12:47:39.973689 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wlsvg\" (UniqueName: \"kubernetes.io/projected/39b74c1e-98f9-4ad0-9a5c-246abf4d60a4-kube-api-access-wlsvg\") pod \"redhat-operators-qld7z\" (UID: \"39b74c1e-98f9-4ad0-9a5c-246abf4d60a4\") " pod="openshift-marketplace/redhat-operators-qld7z" Jan 27 12:47:40 crc kubenswrapper[4745]: I0127 12:47:40.150118 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qld7z" Jan 27 12:47:40 crc kubenswrapper[4745]: I0127 12:47:40.604579 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qld7z"] Jan 27 12:47:40 crc kubenswrapper[4745]: I0127 12:47:40.981890 4745 generic.go:334] "Generic (PLEG): container finished" podID="39b74c1e-98f9-4ad0-9a5c-246abf4d60a4" containerID="46ff9ae39e9a821d1b926c4627335efa40734ed7a02c1e0a0b1f2b78f97a201b" exitCode=0 Jan 27 12:47:40 crc kubenswrapper[4745]: I0127 12:47:40.982192 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qld7z" event={"ID":"39b74c1e-98f9-4ad0-9a5c-246abf4d60a4","Type":"ContainerDied","Data":"46ff9ae39e9a821d1b926c4627335efa40734ed7a02c1e0a0b1f2b78f97a201b"} Jan 27 12:47:40 crc kubenswrapper[4745]: I0127 12:47:40.982228 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qld7z" event={"ID":"39b74c1e-98f9-4ad0-9a5c-246abf4d60a4","Type":"ContainerStarted","Data":"aaaceffaf7cc9c1c323a58683a04fba45a8481ea2eef609723d02159904f13c7"} Jan 27 12:47:40 crc kubenswrapper[4745]: I0127 12:47:40.984724 4745 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 12:47:42 crc kubenswrapper[4745]: I0127 12:47:42.996916 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qld7z" event={"ID":"39b74c1e-98f9-4ad0-9a5c-246abf4d60a4","Type":"ContainerStarted","Data":"7ed39443f08c64459b0da1ff644cb2b64c93b0acb2a84e545248ab8786bcb06d"} Jan 27 12:47:44 crc kubenswrapper[4745]: I0127 12:47:44.007458 4745 generic.go:334] "Generic (PLEG): container finished" podID="39b74c1e-98f9-4ad0-9a5c-246abf4d60a4" containerID="7ed39443f08c64459b0da1ff644cb2b64c93b0acb2a84e545248ab8786bcb06d" exitCode=0 Jan 27 12:47:44 crc kubenswrapper[4745]: I0127 12:47:44.007531 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qld7z" event={"ID":"39b74c1e-98f9-4ad0-9a5c-246abf4d60a4","Type":"ContainerDied","Data":"7ed39443f08c64459b0da1ff644cb2b64c93b0acb2a84e545248ab8786bcb06d"} Jan 27 12:47:45 crc kubenswrapper[4745]: I0127 12:47:45.019409 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qld7z" event={"ID":"39b74c1e-98f9-4ad0-9a5c-246abf4d60a4","Type":"ContainerStarted","Data":"10cef91a78c3ad4cf9725eff8daf4698da570472210a25d9579d05a21a87927e"} Jan 27 12:47:45 crc kubenswrapper[4745]: I0127 12:47:45.038701 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qld7z" podStartSLOduration=2.626375582 podStartE2EDuration="6.038679636s" podCreationTimestamp="2026-01-27 12:47:39 +0000 UTC" firstStartedPulling="2026-01-27 12:47:40.984308381 +0000 UTC m=+2153.789219069" lastFinishedPulling="2026-01-27 12:47:44.396612435 +0000 UTC m=+2157.201523123" observedRunningTime="2026-01-27 12:47:45.034943518 +0000 UTC m=+2157.839854226" watchObservedRunningTime="2026-01-27 12:47:45.038679636 +0000 UTC m=+2157.843590324" Jan 27 12:47:50 crc 
kubenswrapper[4745]: I0127 12:47:50.150998 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qld7z" Jan 27 12:47:50 crc kubenswrapper[4745]: I0127 12:47:50.151378 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qld7z" Jan 27 12:47:50 crc kubenswrapper[4745]: I0127 12:47:50.201377 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qld7z" Jan 27 12:47:51 crc kubenswrapper[4745]: I0127 12:47:51.104617 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qld7z" Jan 27 12:47:51 crc kubenswrapper[4745]: I0127 12:47:51.148512 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qld7z"] Jan 27 12:47:53 crc kubenswrapper[4745]: I0127 12:47:53.076256 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qld7z" podUID="39b74c1e-98f9-4ad0-9a5c-246abf4d60a4" containerName="registry-server" containerID="cri-o://10cef91a78c3ad4cf9725eff8daf4698da570472210a25d9579d05a21a87927e" gracePeriod=2 Jan 27 12:47:54 crc kubenswrapper[4745]: I0127 12:47:54.007253 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qld7z" Jan 27 12:47:54 crc kubenswrapper[4745]: I0127 12:47:54.057885 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlsvg\" (UniqueName: \"kubernetes.io/projected/39b74c1e-98f9-4ad0-9a5c-246abf4d60a4-kube-api-access-wlsvg\") pod \"39b74c1e-98f9-4ad0-9a5c-246abf4d60a4\" (UID: \"39b74c1e-98f9-4ad0-9a5c-246abf4d60a4\") " Jan 27 12:47:54 crc kubenswrapper[4745]: I0127 12:47:54.058001 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39b74c1e-98f9-4ad0-9a5c-246abf4d60a4-catalog-content\") pod \"39b74c1e-98f9-4ad0-9a5c-246abf4d60a4\" (UID: \"39b74c1e-98f9-4ad0-9a5c-246abf4d60a4\") " Jan 27 12:47:54 crc kubenswrapper[4745]: I0127 12:47:54.058025 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39b74c1e-98f9-4ad0-9a5c-246abf4d60a4-utilities\") pod \"39b74c1e-98f9-4ad0-9a5c-246abf4d60a4\" (UID: \"39b74c1e-98f9-4ad0-9a5c-246abf4d60a4\") " Jan 27 12:47:54 crc kubenswrapper[4745]: I0127 12:47:54.059142 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39b74c1e-98f9-4ad0-9a5c-246abf4d60a4-utilities" (OuterVolumeSpecName: "utilities") pod "39b74c1e-98f9-4ad0-9a5c-246abf4d60a4" (UID: "39b74c1e-98f9-4ad0-9a5c-246abf4d60a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:47:54 crc kubenswrapper[4745]: I0127 12:47:54.063913 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39b74c1e-98f9-4ad0-9a5c-246abf4d60a4-kube-api-access-wlsvg" (OuterVolumeSpecName: "kube-api-access-wlsvg") pod "39b74c1e-98f9-4ad0-9a5c-246abf4d60a4" (UID: "39b74c1e-98f9-4ad0-9a5c-246abf4d60a4"). InnerVolumeSpecName "kube-api-access-wlsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:47:54 crc kubenswrapper[4745]: I0127 12:47:54.119069 4745 generic.go:334] "Generic (PLEG): container finished" podID="39b74c1e-98f9-4ad0-9a5c-246abf4d60a4" containerID="10cef91a78c3ad4cf9725eff8daf4698da570472210a25d9579d05a21a87927e" exitCode=0 Jan 27 12:47:54 crc kubenswrapper[4745]: I0127 12:47:54.119189 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qld7z" Jan 27 12:47:54 crc kubenswrapper[4745]: I0127 12:47:54.119241 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qld7z" event={"ID":"39b74c1e-98f9-4ad0-9a5c-246abf4d60a4","Type":"ContainerDied","Data":"10cef91a78c3ad4cf9725eff8daf4698da570472210a25d9579d05a21a87927e"} Jan 27 12:47:54 crc kubenswrapper[4745]: I0127 12:47:54.119478 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qld7z" event={"ID":"39b74c1e-98f9-4ad0-9a5c-246abf4d60a4","Type":"ContainerDied","Data":"aaaceffaf7cc9c1c323a58683a04fba45a8481ea2eef609723d02159904f13c7"} Jan 27 12:47:54 crc kubenswrapper[4745]: I0127 12:47:54.119499 4745 scope.go:117] "RemoveContainer" containerID="10cef91a78c3ad4cf9725eff8daf4698da570472210a25d9579d05a21a87927e" Jan 27 12:47:54 crc kubenswrapper[4745]: I0127 12:47:54.159445 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39b74c1e-98f9-4ad0-9a5c-246abf4d60a4-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 12:47:54 crc kubenswrapper[4745]: I0127 12:47:54.159492 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlsvg\" (UniqueName: \"kubernetes.io/projected/39b74c1e-98f9-4ad0-9a5c-246abf4d60a4-kube-api-access-wlsvg\") on node \"crc\" DevicePath \"\"" Jan 27 12:47:54 crc kubenswrapper[4745]: I0127 12:47:54.175034 4745 scope.go:117] "RemoveContainer" containerID="7ed39443f08c64459b0da1ff644cb2b64c93b0acb2a84e545248ab8786bcb06d" Jan 27 12:47:54 crc kubenswrapper[4745]: I0127 12:47:54.216987 4745 scope.go:117] "RemoveContainer" containerID="46ff9ae39e9a821d1b926c4627335efa40734ed7a02c1e0a0b1f2b78f97a201b" Jan 27 12:47:54 crc kubenswrapper[4745]: I0127 12:47:54.253975 4745 scope.go:117] "RemoveContainer" containerID="10cef91a78c3ad4cf9725eff8daf4698da570472210a25d9579d05a21a87927e" Jan 27 12:47:54 crc kubenswrapper[4745]: E0127 12:47:54.257187 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10cef91a78c3ad4cf9725eff8daf4698da570472210a25d9579d05a21a87927e\": container with ID starting with 10cef91a78c3ad4cf9725eff8daf4698da570472210a25d9579d05a21a87927e not found: ID does not exist" containerID="10cef91a78c3ad4cf9725eff8daf4698da570472210a25d9579d05a21a87927e" Jan 27 12:47:54 crc kubenswrapper[4745]: I0127 12:47:54.257229 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10cef91a78c3ad4cf9725eff8daf4698da570472210a25d9579d05a21a87927e"} err="failed to get container status \"10cef91a78c3ad4cf9725eff8daf4698da570472210a25d9579d05a21a87927e\": rpc error: code = NotFound desc = could not find container \"10cef91a78c3ad4cf9725eff8daf4698da570472210a25d9579d05a21a87927e\": container with ID starting with 10cef91a78c3ad4cf9725eff8daf4698da570472210a25d9579d05a21a87927e not found: ID does not exist" Jan 27 12:47:54 crc kubenswrapper[4745]: I0127 12:47:54.257251 4745 scope.go:117] 
"RemoveContainer" containerID="7ed39443f08c64459b0da1ff644cb2b64c93b0acb2a84e545248ab8786bcb06d" Jan 27 12:47:54 crc kubenswrapper[4745]: E0127 12:47:54.258903 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ed39443f08c64459b0da1ff644cb2b64c93b0acb2a84e545248ab8786bcb06d\": container with ID starting with 7ed39443f08c64459b0da1ff644cb2b64c93b0acb2a84e545248ab8786bcb06d not found: ID does not exist" containerID="7ed39443f08c64459b0da1ff644cb2b64c93b0acb2a84e545248ab8786bcb06d" Jan 27 12:47:54 crc kubenswrapper[4745]: I0127 12:47:54.258934 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ed39443f08c64459b0da1ff644cb2b64c93b0acb2a84e545248ab8786bcb06d"} err="failed to get container status \"7ed39443f08c64459b0da1ff644cb2b64c93b0acb2a84e545248ab8786bcb06d\": rpc error: code = NotFound desc = could not find container \"7ed39443f08c64459b0da1ff644cb2b64c93b0acb2a84e545248ab8786bcb06d\": container with ID starting with 7ed39443f08c64459b0da1ff644cb2b64c93b0acb2a84e545248ab8786bcb06d not found: ID does not exist" Jan 27 12:47:54 crc kubenswrapper[4745]: I0127 12:47:54.258954 4745 scope.go:117] "RemoveContainer" containerID="46ff9ae39e9a821d1b926c4627335efa40734ed7a02c1e0a0b1f2b78f97a201b" Jan 27 12:47:54 crc kubenswrapper[4745]: E0127 12:47:54.262402 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46ff9ae39e9a821d1b926c4627335efa40734ed7a02c1e0a0b1f2b78f97a201b\": container with ID starting with 46ff9ae39e9a821d1b926c4627335efa40734ed7a02c1e0a0b1f2b78f97a201b not found: ID does not exist" containerID="46ff9ae39e9a821d1b926c4627335efa40734ed7a02c1e0a0b1f2b78f97a201b" Jan 27 12:47:54 crc kubenswrapper[4745]: I0127 12:47:54.262457 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46ff9ae39e9a821d1b926c4627335efa40734ed7a02c1e0a0b1f2b78f97a201b"} err="failed to get container status \"46ff9ae39e9a821d1b926c4627335efa40734ed7a02c1e0a0b1f2b78f97a201b\": rpc error: code = NotFound desc = could not find container \"46ff9ae39e9a821d1b926c4627335efa40734ed7a02c1e0a0b1f2b78f97a201b\": container with ID starting with 46ff9ae39e9a821d1b926c4627335efa40734ed7a02c1e0a0b1f2b78f97a201b not found: ID does not exist" Jan 27 12:47:55 crc kubenswrapper[4745]: I0127 12:47:55.656791 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39b74c1e-98f9-4ad0-9a5c-246abf4d60a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39b74c1e-98f9-4ad0-9a5c-246abf4d60a4" (UID: "39b74c1e-98f9-4ad0-9a5c-246abf4d60a4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:47:55 crc kubenswrapper[4745]: I0127 12:47:55.678835 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39b74c1e-98f9-4ad0-9a5c-246abf4d60a4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 12:47:55 crc kubenswrapper[4745]: I0127 12:47:55.966105 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qld7z"] Jan 27 12:47:55 crc kubenswrapper[4745]: I0127 12:47:55.980725 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qld7z"] Jan 27 12:47:56 crc kubenswrapper[4745]: I0127 12:47:56.082802 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39b74c1e-98f9-4ad0-9a5c-246abf4d60a4" path="/var/lib/kubelet/pods/39b74c1e-98f9-4ad0-9a5c-246abf4d60a4/volumes" Jan 27 12:48:05 crc kubenswrapper[4745]: I0127 12:48:05.966902 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 12:48:05 crc kubenswrapper[4745]: I0127 12:48:05.967236 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 12:48:35 crc kubenswrapper[4745]: I0127 12:48:35.966793 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 12:48:35 crc kubenswrapper[4745]: I0127 12:48:35.967961 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 12:49:05 crc kubenswrapper[4745]: I0127 12:49:05.967664 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 12:49:05 crc kubenswrapper[4745]: I0127 12:49:05.968284 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 12:49:05 crc kubenswrapper[4745]: I0127 12:49:05.968351 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" Jan 27 12:49:05 crc kubenswrapper[4745]: I0127 12:49:05.969084 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"7326449f06a2e234c2eecde59157e50c93e734223f6ac7d6d6560b60db07b650"} pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 12:49:05 crc kubenswrapper[4745]: I0127 12:49:05.969142 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" containerID="cri-o://7326449f06a2e234c2eecde59157e50c93e734223f6ac7d6d6560b60db07b650" gracePeriod=600 Jan 27 12:49:06 crc kubenswrapper[4745]: I0127 12:49:06.605665 4745 generic.go:334] "Generic (PLEG): container finished" podID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerID="7326449f06a2e234c2eecde59157e50c93e734223f6ac7d6d6560b60db07b650" exitCode=0 Jan 27 12:49:06 crc kubenswrapper[4745]: I0127 12:49:06.605725 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerDied","Data":"7326449f06a2e234c2eecde59157e50c93e734223f6ac7d6d6560b60db07b650"} Jan 27 12:49:06 crc kubenswrapper[4745]: I0127 12:49:06.606030 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerStarted","Data":"ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0"} Jan 27 12:49:06 crc kubenswrapper[4745]: I0127 12:49:06.606051 4745 scope.go:117] "RemoveContainer" containerID="9ae1fc80905a1806aa55437f3c89d618bd9bb4d23d577fb09dd3c85afc3b14b5" Jan 27 12:50:02 crc kubenswrapper[4745]: I0127 12:50:02.440990 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tkq7v"] Jan 27 12:50:02 crc kubenswrapper[4745]: E0127 12:50:02.442024 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b74c1e-98f9-4ad0-9a5c-246abf4d60a4" containerName="extract-utilities" Jan 27 12:50:02 crc kubenswrapper[4745]: I0127 12:50:02.442047 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b74c1e-98f9-4ad0-9a5c-246abf4d60a4" containerName="extract-utilities" Jan 27 12:50:02 crc kubenswrapper[4745]: E0127 12:50:02.442069 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b74c1e-98f9-4ad0-9a5c-246abf4d60a4" containerName="extract-content" Jan 27 12:50:02 crc kubenswrapper[4745]: I0127 12:50:02.442081 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b74c1e-98f9-4ad0-9a5c-246abf4d60a4" containerName="extract-content" Jan 27 12:50:02 crc kubenswrapper[4745]: E0127 12:50:02.442123 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b74c1e-98f9-4ad0-9a5c-246abf4d60a4" containerName="registry-server" Jan 27 12:50:02 crc kubenswrapper[4745]: I0127 12:50:02.442136 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b74c1e-98f9-4ad0-9a5c-246abf4d60a4" containerName="registry-server" Jan 27 12:50:02 crc kubenswrapper[4745]: I0127 12:50:02.442374 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b74c1e-98f9-4ad0-9a5c-246abf4d60a4" containerName="registry-server" Jan 27 12:50:02 crc kubenswrapper[4745]: I0127 12:50:02.444054 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tkq7v" Jan 27 12:50:02 crc kubenswrapper[4745]: I0127 12:50:02.455255 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tkq7v"] Jan 27 12:50:02 crc kubenswrapper[4745]: I0127 12:50:02.539366 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b446c5ce-49ad-4044-8998-497200de77b5-utilities\") pod \"community-operators-tkq7v\" (UID: \"b446c5ce-49ad-4044-8998-497200de77b5\") " pod="openshift-marketplace/community-operators-tkq7v" Jan 27 12:50:02 crc kubenswrapper[4745]: I0127 12:50:02.539443 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b446c5ce-49ad-4044-8998-497200de77b5-catalog-content\") pod \"community-operators-tkq7v\" (UID: \"b446c5ce-49ad-4044-8998-497200de77b5\") " pod="openshift-marketplace/community-operators-tkq7v" Jan 27 12:50:02 crc kubenswrapper[4745]: I0127 12:50:02.539513 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlm7b\" (UniqueName: \"kubernetes.io/projected/b446c5ce-49ad-4044-8998-497200de77b5-kube-api-access-vlm7b\") pod \"community-operators-tkq7v\" (UID: \"b446c5ce-49ad-4044-8998-497200de77b5\") " pod="openshift-marketplace/community-operators-tkq7v" Jan 27 12:50:02 crc kubenswrapper[4745]: I0127 12:50:02.640653 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b446c5ce-49ad-4044-8998-497200de77b5-utilities\") pod \"community-operators-tkq7v\" (UID: \"b446c5ce-49ad-4044-8998-497200de77b5\") " pod="openshift-marketplace/community-operators-tkq7v" Jan 27 12:50:02 crc kubenswrapper[4745]: I0127 12:50:02.640710 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b446c5ce-49ad-4044-8998-497200de77b5-catalog-content\") pod \"community-operators-tkq7v\" (UID: \"b446c5ce-49ad-4044-8998-497200de77b5\") " pod="openshift-marketplace/community-operators-tkq7v" Jan 27 12:50:02 crc kubenswrapper[4745]: I0127 12:50:02.640758 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlm7b\" (UniqueName: \"kubernetes.io/projected/b446c5ce-49ad-4044-8998-497200de77b5-kube-api-access-vlm7b\") pod \"community-operators-tkq7v\" (UID: \"b446c5ce-49ad-4044-8998-497200de77b5\") " pod="openshift-marketplace/community-operators-tkq7v" Jan 27 12:50:02 crc kubenswrapper[4745]: I0127 12:50:02.641198 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b446c5ce-49ad-4044-8998-497200de77b5-utilities\") pod \"community-operators-tkq7v\" (UID: \"b446c5ce-49ad-4044-8998-497200de77b5\") " pod="openshift-marketplace/community-operators-tkq7v" Jan 27 12:50:02 crc kubenswrapper[4745]: I0127 12:50:02.641257 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b446c5ce-49ad-4044-8998-497200de77b5-catalog-content\") pod \"community-operators-tkq7v\" (UID: \"b446c5ce-49ad-4044-8998-497200de77b5\") " pod="openshift-marketplace/community-operators-tkq7v" Jan 27 12:50:02 crc kubenswrapper[4745]: I0127 12:50:02.666254 4745 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vlm7b\" (UniqueName: \"kubernetes.io/projected/b446c5ce-49ad-4044-8998-497200de77b5-kube-api-access-vlm7b\") pod \"community-operators-tkq7v\" (UID: \"b446c5ce-49ad-4044-8998-497200de77b5\") " pod="openshift-marketplace/community-operators-tkq7v" Jan 27 12:50:02 crc kubenswrapper[4745]: I0127 12:50:02.766958 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tkq7v" Jan 27 12:50:03 crc kubenswrapper[4745]: I0127 12:50:03.274863 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tkq7v"] Jan 27 12:50:04 crc kubenswrapper[4745]: I0127 12:50:04.010230 4745 generic.go:334] "Generic (PLEG): container finished" podID="b446c5ce-49ad-4044-8998-497200de77b5" containerID="3a7d50144067f7c53f50378862901e0767ea6367c7f0b5dc82bb4f142c9c3213" exitCode=0 Jan 27 12:50:04 crc kubenswrapper[4745]: I0127 12:50:04.010272 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkq7v" event={"ID":"b446c5ce-49ad-4044-8998-497200de77b5","Type":"ContainerDied","Data":"3a7d50144067f7c53f50378862901e0767ea6367c7f0b5dc82bb4f142c9c3213"} Jan 27 12:50:04 crc kubenswrapper[4745]: I0127 12:50:04.010533 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkq7v" event={"ID":"b446c5ce-49ad-4044-8998-497200de77b5","Type":"ContainerStarted","Data":"e4bf094c19edf52ffcb8f6b366a25d0c7b7f4029f41f2eb62f477d4e1cb2515a"} Jan 27 12:50:05 crc kubenswrapper[4745]: I0127 12:50:05.019208 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkq7v" event={"ID":"b446c5ce-49ad-4044-8998-497200de77b5","Type":"ContainerStarted","Data":"320ec22044c6d626c7aada79a8bf100d9b801699874dcf979f343f62715e7a19"} Jan 27 12:50:06 crc kubenswrapper[4745]: I0127 12:50:06.028804 4745 generic.go:334] "Generic (PLEG): container finished" podID="b446c5ce-49ad-4044-8998-497200de77b5" containerID="320ec22044c6d626c7aada79a8bf100d9b801699874dcf979f343f62715e7a19" exitCode=0 Jan 27 12:50:06 crc kubenswrapper[4745]: I0127 12:50:06.028881 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkq7v" event={"ID":"b446c5ce-49ad-4044-8998-497200de77b5","Type":"ContainerDied","Data":"320ec22044c6d626c7aada79a8bf100d9b801699874dcf979f343f62715e7a19"} Jan 27 12:50:07 crc kubenswrapper[4745]: I0127 12:50:07.045102 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkq7v" event={"ID":"b446c5ce-49ad-4044-8998-497200de77b5","Type":"ContainerStarted","Data":"1d0fc4e85e3927ca91ab0c73ce1af110924455fa8fc46eb4c737821ed0528622"} Jan 27 12:50:07 crc kubenswrapper[4745]: I0127 12:50:07.071895 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tkq7v" podStartSLOduration=2.673222176 podStartE2EDuration="5.071872005s" podCreationTimestamp="2026-01-27 12:50:02 +0000 UTC" firstStartedPulling="2026-01-27 12:50:04.012250832 +0000 UTC m=+2296.817161520" lastFinishedPulling="2026-01-27 12:50:06.410900641 +0000 UTC m=+2299.215811349" observedRunningTime="2026-01-27 12:50:07.067995014 +0000 UTC m=+2299.872905712" watchObservedRunningTime="2026-01-27 12:50:07.071872005 +0000 UTC m=+2299.876782703" Jan 27 12:50:12 crc kubenswrapper[4745]: I0127 12:50:12.767738 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-tkq7v" Jan 27 12:50:12 crc kubenswrapper[4745]: I0127 12:50:12.768052 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tkq7v" Jan 27 12:50:12 crc kubenswrapper[4745]: I0127 12:50:12.809149 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tkq7v" Jan 27 12:50:13 crc kubenswrapper[4745]: I0127 12:50:13.142777 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tkq7v" Jan 27 12:50:13 crc kubenswrapper[4745]: I0127 12:50:13.186634 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tkq7v"] Jan 27 12:50:15 crc kubenswrapper[4745]: I0127 12:50:15.097726 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tkq7v" podUID="b446c5ce-49ad-4044-8998-497200de77b5" containerName="registry-server" containerID="cri-o://1d0fc4e85e3927ca91ab0c73ce1af110924455fa8fc46eb4c737821ed0528622" gracePeriod=2 Jan 27 12:50:15 crc kubenswrapper[4745]: I0127 12:50:15.511475 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tkq7v" Jan 27 12:50:15 crc kubenswrapper[4745]: I0127 12:50:15.553174 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b446c5ce-49ad-4044-8998-497200de77b5-catalog-content\") pod \"b446c5ce-49ad-4044-8998-497200de77b5\" (UID: \"b446c5ce-49ad-4044-8998-497200de77b5\") " Jan 27 12:50:15 crc kubenswrapper[4745]: I0127 12:50:15.553256 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b446c5ce-49ad-4044-8998-497200de77b5-utilities\") pod \"b446c5ce-49ad-4044-8998-497200de77b5\" (UID: \"b446c5ce-49ad-4044-8998-497200de77b5\") " Jan 27 12:50:15 crc kubenswrapper[4745]: I0127 12:50:15.553371 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlm7b\" (UniqueName: \"kubernetes.io/projected/b446c5ce-49ad-4044-8998-497200de77b5-kube-api-access-vlm7b\") pod \"b446c5ce-49ad-4044-8998-497200de77b5\" (UID: \"b446c5ce-49ad-4044-8998-497200de77b5\") " Jan 27 12:50:15 crc kubenswrapper[4745]: I0127 12:50:15.554511 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b446c5ce-49ad-4044-8998-497200de77b5-utilities" (OuterVolumeSpecName: "utilities") pod "b446c5ce-49ad-4044-8998-497200de77b5" (UID: "b446c5ce-49ad-4044-8998-497200de77b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:50:15 crc kubenswrapper[4745]: I0127 12:50:15.562084 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b446c5ce-49ad-4044-8998-497200de77b5-kube-api-access-vlm7b" (OuterVolumeSpecName: "kube-api-access-vlm7b") pod "b446c5ce-49ad-4044-8998-497200de77b5" (UID: "b446c5ce-49ad-4044-8998-497200de77b5"). InnerVolumeSpecName "kube-api-access-vlm7b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:50:15 crc kubenswrapper[4745]: I0127 12:50:15.654794 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b446c5ce-49ad-4044-8998-497200de77b5-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 12:50:15 crc kubenswrapper[4745]: I0127 12:50:15.654860 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlm7b\" (UniqueName: \"kubernetes.io/projected/b446c5ce-49ad-4044-8998-497200de77b5-kube-api-access-vlm7b\") on node \"crc\" DevicePath \"\"" Jan 27 12:50:16 crc kubenswrapper[4745]: I0127 12:50:16.108202 4745 generic.go:334] "Generic (PLEG): container finished" podID="b446c5ce-49ad-4044-8998-497200de77b5" containerID="1d0fc4e85e3927ca91ab0c73ce1af110924455fa8fc46eb4c737821ed0528622" exitCode=0 Jan 27 12:50:16 crc kubenswrapper[4745]: I0127 12:50:16.108256 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkq7v" event={"ID":"b446c5ce-49ad-4044-8998-497200de77b5","Type":"ContainerDied","Data":"1d0fc4e85e3927ca91ab0c73ce1af110924455fa8fc46eb4c737821ed0528622"} Jan 27 12:50:16 crc kubenswrapper[4745]: I0127 12:50:16.108311 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tkq7v" Jan 27 12:50:16 crc kubenswrapper[4745]: I0127 12:50:16.108327 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkq7v" event={"ID":"b446c5ce-49ad-4044-8998-497200de77b5","Type":"ContainerDied","Data":"e4bf094c19edf52ffcb8f6b366a25d0c7b7f4029f41f2eb62f477d4e1cb2515a"} Jan 27 12:50:16 crc kubenswrapper[4745]: I0127 12:50:16.108350 4745 scope.go:117] "RemoveContainer" containerID="1d0fc4e85e3927ca91ab0c73ce1af110924455fa8fc46eb4c737821ed0528622" Jan 27 12:50:16 crc kubenswrapper[4745]: I0127 12:50:16.130590 4745 scope.go:117] "RemoveContainer" containerID="320ec22044c6d626c7aada79a8bf100d9b801699874dcf979f343f62715e7a19" Jan 27 12:50:16 crc kubenswrapper[4745]: I0127 12:50:16.149041 4745 scope.go:117] "RemoveContainer" containerID="3a7d50144067f7c53f50378862901e0767ea6367c7f0b5dc82bb4f142c9c3213" Jan 27 12:50:16 crc kubenswrapper[4745]: I0127 12:50:16.177012 4745 scope.go:117] "RemoveContainer" containerID="1d0fc4e85e3927ca91ab0c73ce1af110924455fa8fc46eb4c737821ed0528622" Jan 27 12:50:16 crc kubenswrapper[4745]: E0127 12:50:16.177390 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d0fc4e85e3927ca91ab0c73ce1af110924455fa8fc46eb4c737821ed0528622\": container with ID starting with 1d0fc4e85e3927ca91ab0c73ce1af110924455fa8fc46eb4c737821ed0528622 not found: ID does not exist" containerID="1d0fc4e85e3927ca91ab0c73ce1af110924455fa8fc46eb4c737821ed0528622" Jan 27 12:50:16 crc kubenswrapper[4745]: I0127 12:50:16.177431 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d0fc4e85e3927ca91ab0c73ce1af110924455fa8fc46eb4c737821ed0528622"} err="failed to get container status \"1d0fc4e85e3927ca91ab0c73ce1af110924455fa8fc46eb4c737821ed0528622\": rpc error: code = NotFound desc = could not find container \"1d0fc4e85e3927ca91ab0c73ce1af110924455fa8fc46eb4c737821ed0528622\": container with ID starting with 1d0fc4e85e3927ca91ab0c73ce1af110924455fa8fc46eb4c737821ed0528622 not found: ID does not exist" Jan 27 12:50:16 crc kubenswrapper[4745]: I0127 12:50:16.177457 4745 scope.go:117] 
"RemoveContainer" containerID="320ec22044c6d626c7aada79a8bf100d9b801699874dcf979f343f62715e7a19" Jan 27 12:50:16 crc kubenswrapper[4745]: E0127 12:50:16.177727 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"320ec22044c6d626c7aada79a8bf100d9b801699874dcf979f343f62715e7a19\": container with ID starting with 320ec22044c6d626c7aada79a8bf100d9b801699874dcf979f343f62715e7a19 not found: ID does not exist" containerID="320ec22044c6d626c7aada79a8bf100d9b801699874dcf979f343f62715e7a19" Jan 27 12:50:16 crc kubenswrapper[4745]: I0127 12:50:16.177769 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"320ec22044c6d626c7aada79a8bf100d9b801699874dcf979f343f62715e7a19"} err="failed to get container status \"320ec22044c6d626c7aada79a8bf100d9b801699874dcf979f343f62715e7a19\": rpc error: code = NotFound desc = could not find container \"320ec22044c6d626c7aada79a8bf100d9b801699874dcf979f343f62715e7a19\": container with ID starting with 320ec22044c6d626c7aada79a8bf100d9b801699874dcf979f343f62715e7a19 not found: ID does not exist" Jan 27 12:50:16 crc kubenswrapper[4745]: I0127 12:50:16.177790 4745 scope.go:117] "RemoveContainer" containerID="3a7d50144067f7c53f50378862901e0767ea6367c7f0b5dc82bb4f142c9c3213" Jan 27 12:50:16 crc kubenswrapper[4745]: E0127 12:50:16.178265 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a7d50144067f7c53f50378862901e0767ea6367c7f0b5dc82bb4f142c9c3213\": container with ID starting with 3a7d50144067f7c53f50378862901e0767ea6367c7f0b5dc82bb4f142c9c3213 not found: ID does not exist" containerID="3a7d50144067f7c53f50378862901e0767ea6367c7f0b5dc82bb4f142c9c3213" Jan 27 12:50:16 crc kubenswrapper[4745]: I0127 12:50:16.178302 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a7d50144067f7c53f50378862901e0767ea6367c7f0b5dc82bb4f142c9c3213"} err="failed to get container status \"3a7d50144067f7c53f50378862901e0767ea6367c7f0b5dc82bb4f142c9c3213\": rpc error: code = NotFound desc = could not find container \"3a7d50144067f7c53f50378862901e0767ea6367c7f0b5dc82bb4f142c9c3213\": container with ID starting with 3a7d50144067f7c53f50378862901e0767ea6367c7f0b5dc82bb4f142c9c3213 not found: ID does not exist" Jan 27 12:50:16 crc kubenswrapper[4745]: I0127 12:50:16.587437 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b446c5ce-49ad-4044-8998-497200de77b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b446c5ce-49ad-4044-8998-497200de77b5" (UID: "b446c5ce-49ad-4044-8998-497200de77b5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:50:16 crc kubenswrapper[4745]: I0127 12:50:16.667436 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b446c5ce-49ad-4044-8998-497200de77b5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 12:50:16 crc kubenswrapper[4745]: I0127 12:50:16.788102 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tkq7v"] Jan 27 12:50:16 crc kubenswrapper[4745]: I0127 12:50:16.793231 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tkq7v"] Jan 27 12:50:18 crc kubenswrapper[4745]: I0127 12:50:18.083360 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b446c5ce-49ad-4044-8998-497200de77b5" path="/var/lib/kubelet/pods/b446c5ce-49ad-4044-8998-497200de77b5/volumes" Jan 27 12:50:29 crc kubenswrapper[4745]: I0127 12:50:29.957286 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qvpnp"] Jan 27 12:50:29 crc kubenswrapper[4745]: E0127 12:50:29.958145 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b446c5ce-49ad-4044-8998-497200de77b5" containerName="extract-content" Jan 27 12:50:29 crc kubenswrapper[4745]: I0127 12:50:29.958162 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="b446c5ce-49ad-4044-8998-497200de77b5" containerName="extract-content" Jan 27 12:50:29 crc kubenswrapper[4745]: E0127 12:50:29.958179 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b446c5ce-49ad-4044-8998-497200de77b5" containerName="registry-server" Jan 27 12:50:29 crc kubenswrapper[4745]: I0127 12:50:29.958188 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="b446c5ce-49ad-4044-8998-497200de77b5" containerName="registry-server" Jan 27 12:50:29 crc kubenswrapper[4745]: E0127 12:50:29.958202 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b446c5ce-49ad-4044-8998-497200de77b5" containerName="extract-utilities" Jan 27 12:50:29 crc kubenswrapper[4745]: I0127 12:50:29.958209 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="b446c5ce-49ad-4044-8998-497200de77b5" containerName="extract-utilities" Jan 27 12:50:29 crc kubenswrapper[4745]: I0127 12:50:29.958377 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="b446c5ce-49ad-4044-8998-497200de77b5" containerName="registry-server" Jan 27 12:50:29 crc kubenswrapper[4745]: I0127 12:50:29.959394 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvpnp" Jan 27 12:50:29 crc kubenswrapper[4745]: I0127 12:50:29.971491 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvpnp"] Jan 27 12:50:30 crc kubenswrapper[4745]: I0127 12:50:30.152985 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb68b3af-a8a6-43be-95ff-ba6ef88c4867-catalog-content\") pod \"redhat-marketplace-qvpnp\" (UID: \"eb68b3af-a8a6-43be-95ff-ba6ef88c4867\") " pod="openshift-marketplace/redhat-marketplace-qvpnp" Jan 27 12:50:30 crc kubenswrapper[4745]: I0127 12:50:30.153092 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb68b3af-a8a6-43be-95ff-ba6ef88c4867-utilities\") pod \"redhat-marketplace-qvpnp\" (UID: \"eb68b3af-a8a6-43be-95ff-ba6ef88c4867\") " pod="openshift-marketplace/redhat-marketplace-qvpnp" Jan 27 12:50:30 crc kubenswrapper[4745]: I0127 12:50:30.153161 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krlls\" (UniqueName: \"kubernetes.io/projected/eb68b3af-a8a6-43be-95ff-ba6ef88c4867-kube-api-access-krlls\") pod \"redhat-marketplace-qvpnp\" (UID: \"eb68b3af-a8a6-43be-95ff-ba6ef88c4867\") " pod="openshift-marketplace/redhat-marketplace-qvpnp" Jan 27 12:50:30 crc kubenswrapper[4745]: I0127 12:50:30.254917 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb68b3af-a8a6-43be-95ff-ba6ef88c4867-catalog-content\") pod \"redhat-marketplace-qvpnp\" (UID: \"eb68b3af-a8a6-43be-95ff-ba6ef88c4867\") " pod="openshift-marketplace/redhat-marketplace-qvpnp" Jan 27 12:50:30 crc kubenswrapper[4745]: I0127 12:50:30.255272 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb68b3af-a8a6-43be-95ff-ba6ef88c4867-utilities\") pod \"redhat-marketplace-qvpnp\" (UID: \"eb68b3af-a8a6-43be-95ff-ba6ef88c4867\") " pod="openshift-marketplace/redhat-marketplace-qvpnp" Jan 27 12:50:30 crc kubenswrapper[4745]: I0127 12:50:30.255473 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krlls\" (UniqueName: \"kubernetes.io/projected/eb68b3af-a8a6-43be-95ff-ba6ef88c4867-kube-api-access-krlls\") pod \"redhat-marketplace-qvpnp\" (UID: \"eb68b3af-a8a6-43be-95ff-ba6ef88c4867\") " pod="openshift-marketplace/redhat-marketplace-qvpnp" Jan 27 12:50:30 crc kubenswrapper[4745]: I0127 12:50:30.255515 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb68b3af-a8a6-43be-95ff-ba6ef88c4867-catalog-content\") pod \"redhat-marketplace-qvpnp\" (UID: \"eb68b3af-a8a6-43be-95ff-ba6ef88c4867\") " pod="openshift-marketplace/redhat-marketplace-qvpnp" Jan 27 12:50:30 crc kubenswrapper[4745]: I0127 12:50:30.255848 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb68b3af-a8a6-43be-95ff-ba6ef88c4867-utilities\") pod \"redhat-marketplace-qvpnp\" (UID: \"eb68b3af-a8a6-43be-95ff-ba6ef88c4867\") " pod="openshift-marketplace/redhat-marketplace-qvpnp" Jan 27 12:50:30 crc kubenswrapper[4745]: I0127 12:50:30.293486 4745 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-krlls\" (UniqueName: \"kubernetes.io/projected/eb68b3af-a8a6-43be-95ff-ba6ef88c4867-kube-api-access-krlls\") pod \"redhat-marketplace-qvpnp\" (UID: \"eb68b3af-a8a6-43be-95ff-ba6ef88c4867\") " pod="openshift-marketplace/redhat-marketplace-qvpnp" Jan 27 12:50:30 crc kubenswrapper[4745]: I0127 12:50:30.293801 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvpnp" Jan 27 12:50:30 crc kubenswrapper[4745]: I0127 12:50:30.743355 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvpnp"] Jan 27 12:50:31 crc kubenswrapper[4745]: I0127 12:50:31.222470 4745 generic.go:334] "Generic (PLEG): container finished" podID="eb68b3af-a8a6-43be-95ff-ba6ef88c4867" containerID="42ad897c68258058b0e3b4d1b166c7f38d7d7c9a1e5180d9ac306bfe9feca15f" exitCode=0 Jan 27 12:50:31 crc kubenswrapper[4745]: I0127 12:50:31.222524 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvpnp" event={"ID":"eb68b3af-a8a6-43be-95ff-ba6ef88c4867","Type":"ContainerDied","Data":"42ad897c68258058b0e3b4d1b166c7f38d7d7c9a1e5180d9ac306bfe9feca15f"} Jan 27 12:50:31 crc kubenswrapper[4745]: I0127 12:50:31.222768 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvpnp" event={"ID":"eb68b3af-a8a6-43be-95ff-ba6ef88c4867","Type":"ContainerStarted","Data":"90b2fd17b12edaebd52954bf073032e2c02c587f571b33f3e5231deaa14ffff0"} Jan 27 12:50:33 crc kubenswrapper[4745]: I0127 12:50:33.237447 4745 generic.go:334] "Generic (PLEG): container finished" podID="eb68b3af-a8a6-43be-95ff-ba6ef88c4867" containerID="5cd844f683e42ca12ee3696779c72650240dca1bbd852236fb8fc928c3a7bfa5" exitCode=0 Jan 27 12:50:33 crc kubenswrapper[4745]: I0127 12:50:33.238426 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvpnp" event={"ID":"eb68b3af-a8a6-43be-95ff-ba6ef88c4867","Type":"ContainerDied","Data":"5cd844f683e42ca12ee3696779c72650240dca1bbd852236fb8fc928c3a7bfa5"} Jan 27 12:50:34 crc kubenswrapper[4745]: I0127 12:50:34.247864 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvpnp" event={"ID":"eb68b3af-a8a6-43be-95ff-ba6ef88c4867","Type":"ContainerStarted","Data":"59b12c745e3b9e3150bacc1206db91ccf7d42288ca48fde5a885af46d6f70548"} Jan 27 12:50:34 crc kubenswrapper[4745]: I0127 12:50:34.277309 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qvpnp" podStartSLOduration=2.458997682 podStartE2EDuration="5.277290633s" podCreationTimestamp="2026-01-27 12:50:29 +0000 UTC" firstStartedPulling="2026-01-27 12:50:31.224649068 +0000 UTC m=+2324.029559756" lastFinishedPulling="2026-01-27 12:50:34.042942019 +0000 UTC m=+2326.847852707" observedRunningTime="2026-01-27 12:50:34.269157359 +0000 UTC m=+2327.074068057" watchObservedRunningTime="2026-01-27 12:50:34.277290633 +0000 UTC m=+2327.082201341" Jan 27 12:50:34 crc kubenswrapper[4745]: I0127 12:50:34.558123 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g62dp"] Jan 27 12:50:34 crc kubenswrapper[4745]: I0127 12:50:34.559586 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g62dp" Jan 27 12:50:34 crc kubenswrapper[4745]: I0127 12:50:34.577256 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g62dp"] Jan 27 12:50:34 crc kubenswrapper[4745]: I0127 12:50:34.617682 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af703be5-942b-4093-9b57-e26620431af9-utilities\") pod \"certified-operators-g62dp\" (UID: \"af703be5-942b-4093-9b57-e26620431af9\") " pod="openshift-marketplace/certified-operators-g62dp" Jan 27 12:50:34 crc kubenswrapper[4745]: I0127 12:50:34.617803 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r5qp\" (UniqueName: \"kubernetes.io/projected/af703be5-942b-4093-9b57-e26620431af9-kube-api-access-4r5qp\") pod \"certified-operators-g62dp\" (UID: \"af703be5-942b-4093-9b57-e26620431af9\") " pod="openshift-marketplace/certified-operators-g62dp" Jan 27 12:50:34 crc kubenswrapper[4745]: I0127 12:50:34.617904 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af703be5-942b-4093-9b57-e26620431af9-catalog-content\") pod \"certified-operators-g62dp\" (UID: \"af703be5-942b-4093-9b57-e26620431af9\") " pod="openshift-marketplace/certified-operators-g62dp" Jan 27 12:50:34 crc kubenswrapper[4745]: I0127 12:50:34.719269 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af703be5-942b-4093-9b57-e26620431af9-utilities\") pod \"certified-operators-g62dp\" (UID: \"af703be5-942b-4093-9b57-e26620431af9\") " pod="openshift-marketplace/certified-operators-g62dp" Jan 27 12:50:34 crc kubenswrapper[4745]: I0127 12:50:34.719353 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r5qp\" (UniqueName: \"kubernetes.io/projected/af703be5-942b-4093-9b57-e26620431af9-kube-api-access-4r5qp\") pod \"certified-operators-g62dp\" (UID: \"af703be5-942b-4093-9b57-e26620431af9\") " pod="openshift-marketplace/certified-operators-g62dp" Jan 27 12:50:34 crc kubenswrapper[4745]: I0127 12:50:34.719419 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af703be5-942b-4093-9b57-e26620431af9-catalog-content\") pod \"certified-operators-g62dp\" (UID: \"af703be5-942b-4093-9b57-e26620431af9\") " pod="openshift-marketplace/certified-operators-g62dp" Jan 27 12:50:34 crc kubenswrapper[4745]: I0127 12:50:34.720033 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af703be5-942b-4093-9b57-e26620431af9-utilities\") pod \"certified-operators-g62dp\" (UID: \"af703be5-942b-4093-9b57-e26620431af9\") " pod="openshift-marketplace/certified-operators-g62dp" Jan 27 12:50:34 crc kubenswrapper[4745]: I0127 12:50:34.720117 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af703be5-942b-4093-9b57-e26620431af9-catalog-content\") pod \"certified-operators-g62dp\" (UID: \"af703be5-942b-4093-9b57-e26620431af9\") " pod="openshift-marketplace/certified-operators-g62dp" Jan 27 12:50:34 crc kubenswrapper[4745]: I0127 12:50:34.744870 4745 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4r5qp\" (UniqueName: \"kubernetes.io/projected/af703be5-942b-4093-9b57-e26620431af9-kube-api-access-4r5qp\") pod \"certified-operators-g62dp\" (UID: \"af703be5-942b-4093-9b57-e26620431af9\") " pod="openshift-marketplace/certified-operators-g62dp" Jan 27 12:50:34 crc kubenswrapper[4745]: I0127 12:50:34.888537 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g62dp" Jan 27 12:50:35 crc kubenswrapper[4745]: I0127 12:50:35.385940 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g62dp"] Jan 27 12:50:36 crc kubenswrapper[4745]: I0127 12:50:36.261661 4745 generic.go:334] "Generic (PLEG): container finished" podID="af703be5-942b-4093-9b57-e26620431af9" containerID="9d8cf455beb8b33f9edaf8ae9d0f3c9190b2b52cc2ca69fa278889feae65f794" exitCode=0 Jan 27 12:50:36 crc kubenswrapper[4745]: I0127 12:50:36.261699 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g62dp" event={"ID":"af703be5-942b-4093-9b57-e26620431af9","Type":"ContainerDied","Data":"9d8cf455beb8b33f9edaf8ae9d0f3c9190b2b52cc2ca69fa278889feae65f794"} Jan 27 12:50:36 crc kubenswrapper[4745]: I0127 12:50:36.261726 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g62dp" event={"ID":"af703be5-942b-4093-9b57-e26620431af9","Type":"ContainerStarted","Data":"9e4c82b3e6ab0525001679f48a7356db8eff5211bc959a091c4844c307abb12c"} Jan 27 12:50:38 crc kubenswrapper[4745]: I0127 12:50:38.279946 4745 generic.go:334] "Generic (PLEG): container finished" podID="af703be5-942b-4093-9b57-e26620431af9" containerID="605827a4686cdd1bb4f1655c1618cabc5f9afa3c7893e7344bc8b98926d5ca82" exitCode=0 Jan 27 12:50:38 crc kubenswrapper[4745]: I0127 12:50:38.280150 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g62dp" event={"ID":"af703be5-942b-4093-9b57-e26620431af9","Type":"ContainerDied","Data":"605827a4686cdd1bb4f1655c1618cabc5f9afa3c7893e7344bc8b98926d5ca82"} Jan 27 12:50:39 crc kubenswrapper[4745]: I0127 12:50:39.289899 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g62dp" event={"ID":"af703be5-942b-4093-9b57-e26620431af9","Type":"ContainerStarted","Data":"e183eec9dcb5e3381e23977e1115de9eb27e025aa882316d91f97319152ccaa6"} Jan 27 12:50:39 crc kubenswrapper[4745]: I0127 12:50:39.309150 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g62dp" podStartSLOduration=2.89820078 podStartE2EDuration="5.309128442s" podCreationTimestamp="2026-01-27 12:50:34 +0000 UTC" firstStartedPulling="2026-01-27 12:50:36.263025545 +0000 UTC m=+2329.067936233" lastFinishedPulling="2026-01-27 12:50:38.673953207 +0000 UTC m=+2331.478863895" observedRunningTime="2026-01-27 12:50:39.309016299 +0000 UTC m=+2332.113927007" watchObservedRunningTime="2026-01-27 12:50:39.309128442 +0000 UTC m=+2332.114039130" Jan 27 12:50:40 crc kubenswrapper[4745]: I0127 12:50:40.294332 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qvpnp" Jan 27 12:50:40 crc kubenswrapper[4745]: I0127 12:50:40.294393 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qvpnp" Jan 27 12:50:40 crc kubenswrapper[4745]: I0127 12:50:40.331514 4745 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qvpnp" Jan 27 12:50:41 crc kubenswrapper[4745]: I0127 12:50:41.387150 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qvpnp" Jan 27 12:50:41 crc kubenswrapper[4745]: I0127 12:50:41.941368 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvpnp"] Jan 27 12:50:43 crc kubenswrapper[4745]: I0127 12:50:43.317719 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qvpnp" podUID="eb68b3af-a8a6-43be-95ff-ba6ef88c4867" containerName="registry-server" containerID="cri-o://59b12c745e3b9e3150bacc1206db91ccf7d42288ca48fde5a885af46d6f70548" gracePeriod=2 Jan 27 12:50:44 crc kubenswrapper[4745]: I0127 12:50:44.217442 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvpnp" Jan 27 12:50:44 crc kubenswrapper[4745]: I0127 12:50:44.294520 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb68b3af-a8a6-43be-95ff-ba6ef88c4867-catalog-content\") pod \"eb68b3af-a8a6-43be-95ff-ba6ef88c4867\" (UID: \"eb68b3af-a8a6-43be-95ff-ba6ef88c4867\") " Jan 27 12:50:44 crc kubenswrapper[4745]: I0127 12:50:44.294675 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krlls\" (UniqueName: \"kubernetes.io/projected/eb68b3af-a8a6-43be-95ff-ba6ef88c4867-kube-api-access-krlls\") pod \"eb68b3af-a8a6-43be-95ff-ba6ef88c4867\" (UID: \"eb68b3af-a8a6-43be-95ff-ba6ef88c4867\") " Jan 27 12:50:44 crc kubenswrapper[4745]: I0127 12:50:44.294865 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb68b3af-a8a6-43be-95ff-ba6ef88c4867-utilities\") pod \"eb68b3af-a8a6-43be-95ff-ba6ef88c4867\" (UID: \"eb68b3af-a8a6-43be-95ff-ba6ef88c4867\") " Jan 27 12:50:44 crc kubenswrapper[4745]: I0127 12:50:44.295659 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb68b3af-a8a6-43be-95ff-ba6ef88c4867-utilities" (OuterVolumeSpecName: "utilities") pod "eb68b3af-a8a6-43be-95ff-ba6ef88c4867" (UID: "eb68b3af-a8a6-43be-95ff-ba6ef88c4867"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:50:44 crc kubenswrapper[4745]: I0127 12:50:44.301708 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb68b3af-a8a6-43be-95ff-ba6ef88c4867-kube-api-access-krlls" (OuterVolumeSpecName: "kube-api-access-krlls") pod "eb68b3af-a8a6-43be-95ff-ba6ef88c4867" (UID: "eb68b3af-a8a6-43be-95ff-ba6ef88c4867"). InnerVolumeSpecName "kube-api-access-krlls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:50:44 crc kubenswrapper[4745]: I0127 12:50:44.320460 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb68b3af-a8a6-43be-95ff-ba6ef88c4867-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb68b3af-a8a6-43be-95ff-ba6ef88c4867" (UID: "eb68b3af-a8a6-43be-95ff-ba6ef88c4867"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:50:44 crc kubenswrapper[4745]: I0127 12:50:44.328255 4745 generic.go:334] "Generic (PLEG): container finished" podID="eb68b3af-a8a6-43be-95ff-ba6ef88c4867" containerID="59b12c745e3b9e3150bacc1206db91ccf7d42288ca48fde5a885af46d6f70548" exitCode=0 Jan 27 12:50:44 crc kubenswrapper[4745]: I0127 12:50:44.328335 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvpnp" Jan 27 12:50:44 crc kubenswrapper[4745]: I0127 12:50:44.328327 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvpnp" event={"ID":"eb68b3af-a8a6-43be-95ff-ba6ef88c4867","Type":"ContainerDied","Data":"59b12c745e3b9e3150bacc1206db91ccf7d42288ca48fde5a885af46d6f70548"} Jan 27 12:50:44 crc kubenswrapper[4745]: I0127 12:50:44.328403 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvpnp" event={"ID":"eb68b3af-a8a6-43be-95ff-ba6ef88c4867","Type":"ContainerDied","Data":"90b2fd17b12edaebd52954bf073032e2c02c587f571b33f3e5231deaa14ffff0"} Jan 27 12:50:44 crc kubenswrapper[4745]: I0127 12:50:44.328424 4745 scope.go:117] "RemoveContainer" containerID="59b12c745e3b9e3150bacc1206db91ccf7d42288ca48fde5a885af46d6f70548" Jan 27 12:50:44 crc kubenswrapper[4745]: I0127 12:50:44.344029 4745 scope.go:117] "RemoveContainer" containerID="5cd844f683e42ca12ee3696779c72650240dca1bbd852236fb8fc928c3a7bfa5" Jan 27 12:50:44 crc kubenswrapper[4745]: I0127 12:50:44.365889 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvpnp"] Jan 27 12:50:44 crc kubenswrapper[4745]: I0127 12:50:44.366045 4745 scope.go:117] "RemoveContainer" containerID="42ad897c68258058b0e3b4d1b166c7f38d7d7c9a1e5180d9ac306bfe9feca15f" Jan 27 12:50:44 crc kubenswrapper[4745]: I0127 12:50:44.374928 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvpnp"] Jan 27 12:50:44 crc kubenswrapper[4745]: I0127 12:50:44.396654 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb68b3af-a8a6-43be-95ff-ba6ef88c4867-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 12:50:44 crc kubenswrapper[4745]: I0127 12:50:44.396687 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb68b3af-a8a6-43be-95ff-ba6ef88c4867-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 12:50:44 crc kubenswrapper[4745]: I0127 12:50:44.396696 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krlls\" (UniqueName: \"kubernetes.io/projected/eb68b3af-a8a6-43be-95ff-ba6ef88c4867-kube-api-access-krlls\") on node \"crc\" DevicePath \"\"" Jan 27 12:50:44 crc kubenswrapper[4745]: I0127 12:50:44.415340 4745 scope.go:117] "RemoveContainer" containerID="59b12c745e3b9e3150bacc1206db91ccf7d42288ca48fde5a885af46d6f70548" Jan 27 12:50:44 crc kubenswrapper[4745]: E0127 12:50:44.416092 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59b12c745e3b9e3150bacc1206db91ccf7d42288ca48fde5a885af46d6f70548\": container with ID starting with 59b12c745e3b9e3150bacc1206db91ccf7d42288ca48fde5a885af46d6f70548 not found: ID does not exist" containerID="59b12c745e3b9e3150bacc1206db91ccf7d42288ca48fde5a885af46d6f70548" Jan 27 12:50:44 crc kubenswrapper[4745]: I0127 12:50:44.416167 4745 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59b12c745e3b9e3150bacc1206db91ccf7d42288ca48fde5a885af46d6f70548"} err="failed to get container status \"59b12c745e3b9e3150bacc1206db91ccf7d42288ca48fde5a885af46d6f70548\": rpc error: code = NotFound desc = could not find container \"59b12c745e3b9e3150bacc1206db91ccf7d42288ca48fde5a885af46d6f70548\": container with ID starting with 59b12c745e3b9e3150bacc1206db91ccf7d42288ca48fde5a885af46d6f70548 not found: ID does not exist" Jan 27 12:50:44 crc kubenswrapper[4745]: I0127 12:50:44.416221 4745 scope.go:117] "RemoveContainer" containerID="5cd844f683e42ca12ee3696779c72650240dca1bbd852236fb8fc928c3a7bfa5" Jan 27 12:50:44 crc kubenswrapper[4745]: E0127 12:50:44.416764 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cd844f683e42ca12ee3696779c72650240dca1bbd852236fb8fc928c3a7bfa5\": container with ID starting with 5cd844f683e42ca12ee3696779c72650240dca1bbd852236fb8fc928c3a7bfa5 not found: ID does not exist" containerID="5cd844f683e42ca12ee3696779c72650240dca1bbd852236fb8fc928c3a7bfa5" Jan 27 12:50:44 crc kubenswrapper[4745]: I0127 12:50:44.416838 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cd844f683e42ca12ee3696779c72650240dca1bbd852236fb8fc928c3a7bfa5"} err="failed to get container status \"5cd844f683e42ca12ee3696779c72650240dca1bbd852236fb8fc928c3a7bfa5\": rpc error: code = NotFound desc = could not find container \"5cd844f683e42ca12ee3696779c72650240dca1bbd852236fb8fc928c3a7bfa5\": container with ID starting with 5cd844f683e42ca12ee3696779c72650240dca1bbd852236fb8fc928c3a7bfa5 not found: ID does not exist" Jan 27 12:50:44 crc kubenswrapper[4745]: I0127 12:50:44.416876 4745 scope.go:117] "RemoveContainer" containerID="42ad897c68258058b0e3b4d1b166c7f38d7d7c9a1e5180d9ac306bfe9feca15f" Jan 27 12:50:44 crc kubenswrapper[4745]: E0127 12:50:44.417447 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42ad897c68258058b0e3b4d1b166c7f38d7d7c9a1e5180d9ac306bfe9feca15f\": container with ID starting with 42ad897c68258058b0e3b4d1b166c7f38d7d7c9a1e5180d9ac306bfe9feca15f not found: ID does not exist" containerID="42ad897c68258058b0e3b4d1b166c7f38d7d7c9a1e5180d9ac306bfe9feca15f" Jan 27 12:50:44 crc kubenswrapper[4745]: I0127 12:50:44.417476 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42ad897c68258058b0e3b4d1b166c7f38d7d7c9a1e5180d9ac306bfe9feca15f"} err="failed to get container status \"42ad897c68258058b0e3b4d1b166c7f38d7d7c9a1e5180d9ac306bfe9feca15f\": rpc error: code = NotFound desc = could not find container \"42ad897c68258058b0e3b4d1b166c7f38d7d7c9a1e5180d9ac306bfe9feca15f\": container with ID starting with 42ad897c68258058b0e3b4d1b166c7f38d7d7c9a1e5180d9ac306bfe9feca15f not found: ID does not exist" Jan 27 12:50:44 crc kubenswrapper[4745]: I0127 12:50:44.890548 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g62dp" Jan 27 12:50:44 crc kubenswrapper[4745]: I0127 12:50:44.890668 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g62dp" Jan 27 12:50:44 crc kubenswrapper[4745]: I0127 12:50:44.940632 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-g62dp" Jan 27 12:50:45 crc kubenswrapper[4745]: I0127 12:50:45.374040 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g62dp" Jan 27 12:50:46 crc kubenswrapper[4745]: I0127 12:50:46.084193 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb68b3af-a8a6-43be-95ff-ba6ef88c4867" path="/var/lib/kubelet/pods/eb68b3af-a8a6-43be-95ff-ba6ef88c4867/volumes" Jan 27 12:50:47 crc kubenswrapper[4745]: I0127 12:50:47.338242 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g62dp"] Jan 27 12:50:47 crc kubenswrapper[4745]: I0127 12:50:47.349941 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g62dp" podUID="af703be5-942b-4093-9b57-e26620431af9" containerName="registry-server" containerID="cri-o://e183eec9dcb5e3381e23977e1115de9eb27e025aa882316d91f97319152ccaa6" gracePeriod=2 Jan 27 12:50:48 crc kubenswrapper[4745]: I0127 12:50:48.213849 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g62dp" Jan 27 12:50:48 crc kubenswrapper[4745]: I0127 12:50:48.351532 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r5qp\" (UniqueName: \"kubernetes.io/projected/af703be5-942b-4093-9b57-e26620431af9-kube-api-access-4r5qp\") pod \"af703be5-942b-4093-9b57-e26620431af9\" (UID: \"af703be5-942b-4093-9b57-e26620431af9\") " Jan 27 12:50:48 crc kubenswrapper[4745]: I0127 12:50:48.351670 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af703be5-942b-4093-9b57-e26620431af9-utilities\") pod \"af703be5-942b-4093-9b57-e26620431af9\" (UID: \"af703be5-942b-4093-9b57-e26620431af9\") " Jan 27 12:50:48 crc kubenswrapper[4745]: I0127 12:50:48.351783 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af703be5-942b-4093-9b57-e26620431af9-catalog-content\") pod \"af703be5-942b-4093-9b57-e26620431af9\" (UID: \"af703be5-942b-4093-9b57-e26620431af9\") " Jan 27 12:50:48 crc kubenswrapper[4745]: I0127 12:50:48.353358 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af703be5-942b-4093-9b57-e26620431af9-utilities" (OuterVolumeSpecName: "utilities") pod "af703be5-942b-4093-9b57-e26620431af9" (UID: "af703be5-942b-4093-9b57-e26620431af9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:50:48 crc kubenswrapper[4745]: I0127 12:50:48.360564 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af703be5-942b-4093-9b57-e26620431af9-kube-api-access-4r5qp" (OuterVolumeSpecName: "kube-api-access-4r5qp") pod "af703be5-942b-4093-9b57-e26620431af9" (UID: "af703be5-942b-4093-9b57-e26620431af9"). InnerVolumeSpecName "kube-api-access-4r5qp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 12:50:48 crc kubenswrapper[4745]: I0127 12:50:48.362921 4745 generic.go:334] "Generic (PLEG): container finished" podID="af703be5-942b-4093-9b57-e26620431af9" containerID="e183eec9dcb5e3381e23977e1115de9eb27e025aa882316d91f97319152ccaa6" exitCode=0 Jan 27 12:50:48 crc kubenswrapper[4745]: I0127 12:50:48.362964 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g62dp" event={"ID":"af703be5-942b-4093-9b57-e26620431af9","Type":"ContainerDied","Data":"e183eec9dcb5e3381e23977e1115de9eb27e025aa882316d91f97319152ccaa6"} Jan 27 12:50:48 crc kubenswrapper[4745]: I0127 12:50:48.362991 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g62dp" event={"ID":"af703be5-942b-4093-9b57-e26620431af9","Type":"ContainerDied","Data":"9e4c82b3e6ab0525001679f48a7356db8eff5211bc959a091c4844c307abb12c"} Jan 27 12:50:48 crc kubenswrapper[4745]: I0127 12:50:48.363012 4745 scope.go:117] "RemoveContainer" containerID="e183eec9dcb5e3381e23977e1115de9eb27e025aa882316d91f97319152ccaa6" Jan 27 12:50:48 crc kubenswrapper[4745]: I0127 12:50:48.363240 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g62dp" Jan 27 12:50:48 crc kubenswrapper[4745]: I0127 12:50:48.400126 4745 scope.go:117] "RemoveContainer" containerID="605827a4686cdd1bb4f1655c1618cabc5f9afa3c7893e7344bc8b98926d5ca82" Jan 27 12:50:48 crc kubenswrapper[4745]: I0127 12:50:48.420615 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af703be5-942b-4093-9b57-e26620431af9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af703be5-942b-4093-9b57-e26620431af9" (UID: "af703be5-942b-4093-9b57-e26620431af9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 12:50:48 crc kubenswrapper[4745]: I0127 12:50:48.423011 4745 scope.go:117] "RemoveContainer" containerID="9d8cf455beb8b33f9edaf8ae9d0f3c9190b2b52cc2ca69fa278889feae65f794" Jan 27 12:50:48 crc kubenswrapper[4745]: I0127 12:50:48.448526 4745 scope.go:117] "RemoveContainer" containerID="e183eec9dcb5e3381e23977e1115de9eb27e025aa882316d91f97319152ccaa6" Jan 27 12:50:48 crc kubenswrapper[4745]: E0127 12:50:48.449112 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e183eec9dcb5e3381e23977e1115de9eb27e025aa882316d91f97319152ccaa6\": container with ID starting with e183eec9dcb5e3381e23977e1115de9eb27e025aa882316d91f97319152ccaa6 not found: ID does not exist" containerID="e183eec9dcb5e3381e23977e1115de9eb27e025aa882316d91f97319152ccaa6" Jan 27 12:50:48 crc kubenswrapper[4745]: I0127 12:50:48.449203 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e183eec9dcb5e3381e23977e1115de9eb27e025aa882316d91f97319152ccaa6"} err="failed to get container status \"e183eec9dcb5e3381e23977e1115de9eb27e025aa882316d91f97319152ccaa6\": rpc error: code = NotFound desc = could not find container \"e183eec9dcb5e3381e23977e1115de9eb27e025aa882316d91f97319152ccaa6\": container with ID starting with e183eec9dcb5e3381e23977e1115de9eb27e025aa882316d91f97319152ccaa6 not found: ID does not exist" Jan 27 12:50:48 crc kubenswrapper[4745]: I0127 12:50:48.449238 4745 scope.go:117] "RemoveContainer" containerID="605827a4686cdd1bb4f1655c1618cabc5f9afa3c7893e7344bc8b98926d5ca82" Jan 27 12:50:48 crc kubenswrapper[4745]: E0127 12:50:48.449701 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"605827a4686cdd1bb4f1655c1618cabc5f9afa3c7893e7344bc8b98926d5ca82\": container with ID starting with 605827a4686cdd1bb4f1655c1618cabc5f9afa3c7893e7344bc8b98926d5ca82 not found: ID does not exist" containerID="605827a4686cdd1bb4f1655c1618cabc5f9afa3c7893e7344bc8b98926d5ca82" Jan 27 12:50:48 crc kubenswrapper[4745]: I0127 12:50:48.449753 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"605827a4686cdd1bb4f1655c1618cabc5f9afa3c7893e7344bc8b98926d5ca82"} err="failed to get container status \"605827a4686cdd1bb4f1655c1618cabc5f9afa3c7893e7344bc8b98926d5ca82\": rpc error: code = NotFound desc = could not find container \"605827a4686cdd1bb4f1655c1618cabc5f9afa3c7893e7344bc8b98926d5ca82\": container with ID starting with 605827a4686cdd1bb4f1655c1618cabc5f9afa3c7893e7344bc8b98926d5ca82 not found: ID does not exist" Jan 27 12:50:48 crc kubenswrapper[4745]: I0127 12:50:48.449782 4745 scope.go:117] "RemoveContainer" containerID="9d8cf455beb8b33f9edaf8ae9d0f3c9190b2b52cc2ca69fa278889feae65f794" Jan 27 12:50:48 crc kubenswrapper[4745]: E0127 12:50:48.450120 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d8cf455beb8b33f9edaf8ae9d0f3c9190b2b52cc2ca69fa278889feae65f794\": container with ID starting with 9d8cf455beb8b33f9edaf8ae9d0f3c9190b2b52cc2ca69fa278889feae65f794 not found: ID does not exist" containerID="9d8cf455beb8b33f9edaf8ae9d0f3c9190b2b52cc2ca69fa278889feae65f794" Jan 27 12:50:48 crc kubenswrapper[4745]: I0127 12:50:48.450161 4745 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9d8cf455beb8b33f9edaf8ae9d0f3c9190b2b52cc2ca69fa278889feae65f794"} err="failed to get container status \"9d8cf455beb8b33f9edaf8ae9d0f3c9190b2b52cc2ca69fa278889feae65f794\": rpc error: code = NotFound desc = could not find container \"9d8cf455beb8b33f9edaf8ae9d0f3c9190b2b52cc2ca69fa278889feae65f794\": container with ID starting with 9d8cf455beb8b33f9edaf8ae9d0f3c9190b2b52cc2ca69fa278889feae65f794 not found: ID does not exist" Jan 27 12:50:48 crc kubenswrapper[4745]: I0127 12:50:48.454193 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af703be5-942b-4093-9b57-e26620431af9-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 12:50:48 crc kubenswrapper[4745]: I0127 12:50:48.454226 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af703be5-942b-4093-9b57-e26620431af9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 12:50:48 crc kubenswrapper[4745]: I0127 12:50:48.454242 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4r5qp\" (UniqueName: \"kubernetes.io/projected/af703be5-942b-4093-9b57-e26620431af9-kube-api-access-4r5qp\") on node \"crc\" DevicePath \"\"" Jan 27 12:50:48 crc kubenswrapper[4745]: I0127 12:50:48.710090 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g62dp"] Jan 27 12:50:48 crc kubenswrapper[4745]: I0127 12:50:48.715244 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g62dp"] Jan 27 12:50:50 crc kubenswrapper[4745]: I0127 12:50:50.082479 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af703be5-942b-4093-9b57-e26620431af9" path="/var/lib/kubelet/pods/af703be5-942b-4093-9b57-e26620431af9/volumes" Jan 27 12:51:35 crc kubenswrapper[4745]: I0127 12:51:35.967072 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 12:51:35 crc kubenswrapper[4745]: I0127 12:51:35.967522 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 12:52:05 crc kubenswrapper[4745]: I0127 12:52:05.967159 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 12:52:05 crc kubenswrapper[4745]: I0127 12:52:05.967793 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 12:52:35 crc kubenswrapper[4745]: I0127 12:52:35.967208 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 12:52:35 crc kubenswrapper[4745]: I0127 12:52:35.967769 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 12:52:35 crc kubenswrapper[4745]: I0127 12:52:35.967836 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" Jan 27 12:52:35 crc kubenswrapper[4745]: I0127 12:52:35.968443 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0"} pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 12:52:35 crc kubenswrapper[4745]: I0127 12:52:35.968499 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" containerID="cri-o://ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0" gracePeriod=600 Jan 27 12:52:36 crc kubenswrapper[4745]: E0127 12:52:36.090573 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:52:36 crc kubenswrapper[4745]: I0127 12:52:36.134090 4745 generic.go:334] "Generic (PLEG): container finished" podID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerID="ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0" exitCode=0 Jan 27 12:52:36 crc kubenswrapper[4745]: I0127 12:52:36.134157 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerDied","Data":"ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0"} Jan 27 12:52:36 crc kubenswrapper[4745]: I0127 12:52:36.134225 4745 scope.go:117] "RemoveContainer" containerID="7326449f06a2e234c2eecde59157e50c93e734223f6ac7d6d6560b60db07b650" Jan 27 12:52:36 crc kubenswrapper[4745]: I0127 12:52:36.134954 4745 scope.go:117] "RemoveContainer" containerID="ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0" Jan 27 12:52:36 crc kubenswrapper[4745]: E0127 12:52:36.135309 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" 
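The three liveness-probe failures above land exactly 30 s apart, and after the 12:52:35 failure the kubelet kills machine-config-daemon with the logged 600 s grace period; the "Error syncing pod" records that follow every 11-15 s are sync-loop retries being refused while the 5m0s CrashLoopBackOff window holds, not fresh restarts. A quick spacing check (a sketch: the timestamps are copied from the records above, and reading the cadence as a 30 s probe period with a three-failure threshold is an inference, since the probe spec itself is not part of this log):

    from datetime import datetime

    # Liveness-probe failure times for machine-config-daemon-gfzkp, copied from the log above.
    failures = ["12:51:35.967072", "12:52:05.967159", "12:52:35.967208"]
    ts = [datetime.strptime(t, "%H:%M:%S.%f") for t in failures]
    print([(b - a).total_seconds() for a, b in zip(ts, ts[1:])])  # [30.000087, 30.000049] -> 30 s period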
Jan 27 12:52:49 crc kubenswrapper[4745]: I0127 12:52:49.074249 4745 scope.go:117] "RemoveContainer" containerID="ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0" Jan 27 12:52:49 crc kubenswrapper[4745]: E0127 12:52:49.075225 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:53:01 crc kubenswrapper[4745]: I0127 12:53:01.073658 4745 scope.go:117] "RemoveContainer" containerID="ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0" Jan 27 12:53:01 crc kubenswrapper[4745]: E0127 12:53:01.074352 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:53:13 crc kubenswrapper[4745]: I0127 12:53:13.074034 4745 scope.go:117] "RemoveContainer" containerID="ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0" Jan 27 12:53:13 crc kubenswrapper[4745]: E0127 12:53:13.074704 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:53:26 crc kubenswrapper[4745]: I0127 12:53:26.075013 4745 scope.go:117] "RemoveContainer" containerID="ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0" Jan 27 12:53:26 crc kubenswrapper[4745]: E0127 12:53:26.077608 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:53:37 crc kubenswrapper[4745]: I0127 12:53:37.074161 4745 scope.go:117] "RemoveContainer" containerID="ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0" Jan 27 12:53:37 crc kubenswrapper[4745]: E0127 12:53:37.074737 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:53:49 crc kubenswrapper[4745]: I0127 12:53:49.074682 4745 scope.go:117] "RemoveContainer" containerID="ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0" Jan 27 12:53:49 
crc kubenswrapper[4745]: E0127 12:53:49.076021 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:54:01 crc kubenswrapper[4745]: I0127 12:54:01.074512 4745 scope.go:117] "RemoveContainer" containerID="ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0" Jan 27 12:54:01 crc kubenswrapper[4745]: E0127 12:54:01.075337 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:54:14 crc kubenswrapper[4745]: I0127 12:54:14.073954 4745 scope.go:117] "RemoveContainer" containerID="ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0" Jan 27 12:54:14 crc kubenswrapper[4745]: E0127 12:54:14.074762 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:54:25 crc kubenswrapper[4745]: I0127 12:54:25.076579 4745 scope.go:117] "RemoveContainer" containerID="ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0" Jan 27 12:54:25 crc kubenswrapper[4745]: E0127 12:54:25.080369 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:54:37 crc kubenswrapper[4745]: I0127 12:54:37.073946 4745 scope.go:117] "RemoveContainer" containerID="ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0" Jan 27 12:54:37 crc kubenswrapper[4745]: E0127 12:54:37.074701 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:54:48 crc kubenswrapper[4745]: I0127 12:54:48.079731 4745 scope.go:117] "RemoveContainer" containerID="ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0" Jan 27 12:54:48 crc kubenswrapper[4745]: E0127 12:54:48.080378 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:55:00 crc kubenswrapper[4745]: I0127 12:55:00.074132 4745 scope.go:117] "RemoveContainer" containerID="ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0" Jan 27 12:55:00 crc kubenswrapper[4745]: E0127 12:55:00.075156 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:55:11 crc kubenswrapper[4745]: I0127 12:55:11.073601 4745 scope.go:117] "RemoveContainer" containerID="ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0" Jan 27 12:55:11 crc kubenswrapper[4745]: E0127 12:55:11.074754 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:55:23 crc kubenswrapper[4745]: I0127 12:55:23.073882 4745 scope.go:117] "RemoveContainer" containerID="ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0" Jan 27 12:55:23 crc kubenswrapper[4745]: E0127 12:55:23.074597 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:55:34 crc kubenswrapper[4745]: I0127 12:55:34.074556 4745 scope.go:117] "RemoveContainer" containerID="ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0" Jan 27 12:55:34 crc kubenswrapper[4745]: E0127 12:55:34.075240 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:55:46 crc kubenswrapper[4745]: I0127 12:55:46.074068 4745 scope.go:117] "RemoveContainer" containerID="ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0" Jan 27 12:55:46 crc kubenswrapper[4745]: E0127 12:55:46.074903 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:55:59 crc kubenswrapper[4745]: I0127 12:55:59.073860 4745 scope.go:117] "RemoveContainer" containerID="ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0" Jan 27 12:55:59 crc kubenswrapper[4745]: E0127 12:55:59.074646 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:56:14 crc kubenswrapper[4745]: I0127 12:56:14.074179 4745 scope.go:117] "RemoveContainer" containerID="ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0" Jan 27 12:56:14 crc kubenswrapper[4745]: E0127 12:56:14.074876 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:56:29 crc kubenswrapper[4745]: I0127 12:56:29.073903 4745 scope.go:117] "RemoveContainer" containerID="ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0" Jan 27 12:56:29 crc kubenswrapper[4745]: E0127 12:56:29.074574 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:56:40 crc kubenswrapper[4745]: I0127 12:56:40.074094 4745 scope.go:117] "RemoveContainer" containerID="ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0" Jan 27 12:56:40 crc kubenswrapper[4745]: E0127 12:56:40.075318 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:56:55 crc kubenswrapper[4745]: I0127 12:56:55.074183 4745 scope.go:117] "RemoveContainer" containerID="ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0" Jan 27 12:56:55 crc kubenswrapper[4745]: E0127 12:56:55.074962 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:57:06 crc kubenswrapper[4745]: I0127 12:57:06.073250 4745 
scope.go:117] "RemoveContainer" containerID="ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0" Jan 27 12:57:06 crc kubenswrapper[4745]: E0127 12:57:06.073890 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:57:19 crc kubenswrapper[4745]: I0127 12:57:19.073765 4745 scope.go:117] "RemoveContainer" containerID="ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0" Jan 27 12:57:19 crc kubenswrapper[4745]: E0127 12:57:19.074848 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:57:31 crc kubenswrapper[4745]: I0127 12:57:31.074066 4745 scope.go:117] "RemoveContainer" containerID="ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0" Jan 27 12:57:31 crc kubenswrapper[4745]: E0127 12:57:31.081099 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 12:57:45 crc kubenswrapper[4745]: I0127 12:57:45.073707 4745 scope.go:117] "RemoveContainer" containerID="ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0" Jan 27 12:57:46 crc kubenswrapper[4745]: I0127 12:57:46.342126 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerStarted","Data":"13887eb0088f3e5d43d51e708ce4f207dcbfd90318c899c53ec115321a09107c"} Jan 27 13:00:00 crc kubenswrapper[4745]: I0127 13:00:00.147621 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491980-tk2bd"] Jan 27 13:00:00 crc kubenswrapper[4745]: E0127 13:00:00.148590 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af703be5-942b-4093-9b57-e26620431af9" containerName="registry-server" Jan 27 13:00:00 crc kubenswrapper[4745]: I0127 13:00:00.148607 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="af703be5-942b-4093-9b57-e26620431af9" containerName="registry-server" Jan 27 13:00:00 crc kubenswrapper[4745]: E0127 13:00:00.148623 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af703be5-942b-4093-9b57-e26620431af9" containerName="extract-utilities" Jan 27 13:00:00 crc kubenswrapper[4745]: I0127 13:00:00.148633 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="af703be5-942b-4093-9b57-e26620431af9" containerName="extract-utilities" Jan 27 13:00:00 crc kubenswrapper[4745]: E0127 13:00:00.148654 4745 cpu_manager.go:410] 
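Note: the five-minute stretch above is one CrashLoopBackOff episode. Roughly every 10-15 s the pod worker retries the sync, finds machine-config-daemon still inside its restart back-off, and logs the same I/E pair; the container is only recreated at 12:57:46, once the back-off window has elapsed. Assuming upstream kubelet defaults, the per-restart delay starts at 10 s and doubles up to a 5 m cap (hence the constant "back-off 5m0s"), resetting after 10 minutes of stable running. A minimal Go sketch of that capped doubling, under those assumed defaults and not kubelet's actual code:

    package main

    import (
        "fmt"
        "time"
    )

    // crashLoopDelay returns the back-off before restart attempt n,
    // using the assumed defaults: 10s base, 2x growth, 5m cap.
    func crashLoopDelay(restarts int) time.Duration {
        const (
            base     = 10 * time.Second
            maxDelay = 5 * time.Minute
        )
        d := base
        for i := 0; i < restarts; i++ {
            d *= 2
            if d >= maxDelay {
                return maxDelay
            }
        }
        return d
    }

    func main() {
        for r := 0; r <= 6; r++ {
            fmt.Printf("restart %d -> wait %v\n", r, crashLoopDelay(r))
        }
        // 10s, 20s, 40s, 1m20s, 2m40s, then 5m0s capped: after a handful
        // of crashes every further message reads "back-off 5m0s".
    }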
"RemoveStaleState: removing container" podUID="eb68b3af-a8a6-43be-95ff-ba6ef88c4867" containerName="extract-utilities" Jan 27 13:00:00 crc kubenswrapper[4745]: I0127 13:00:00.148662 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb68b3af-a8a6-43be-95ff-ba6ef88c4867" containerName="extract-utilities" Jan 27 13:00:00 crc kubenswrapper[4745]: E0127 13:00:00.148685 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb68b3af-a8a6-43be-95ff-ba6ef88c4867" containerName="extract-content" Jan 27 13:00:00 crc kubenswrapper[4745]: I0127 13:00:00.148694 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb68b3af-a8a6-43be-95ff-ba6ef88c4867" containerName="extract-content" Jan 27 13:00:00 crc kubenswrapper[4745]: E0127 13:00:00.148705 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb68b3af-a8a6-43be-95ff-ba6ef88c4867" containerName="registry-server" Jan 27 13:00:00 crc kubenswrapper[4745]: I0127 13:00:00.148713 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb68b3af-a8a6-43be-95ff-ba6ef88c4867" containerName="registry-server" Jan 27 13:00:00 crc kubenswrapper[4745]: E0127 13:00:00.148729 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af703be5-942b-4093-9b57-e26620431af9" containerName="extract-content" Jan 27 13:00:00 crc kubenswrapper[4745]: I0127 13:00:00.148736 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="af703be5-942b-4093-9b57-e26620431af9" containerName="extract-content" Jan 27 13:00:00 crc kubenswrapper[4745]: I0127 13:00:00.148931 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb68b3af-a8a6-43be-95ff-ba6ef88c4867" containerName="registry-server" Jan 27 13:00:00 crc kubenswrapper[4745]: I0127 13:00:00.148949 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="af703be5-942b-4093-9b57-e26620431af9" containerName="registry-server" Jan 27 13:00:00 crc kubenswrapper[4745]: I0127 13:00:00.149507 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-tk2bd" Jan 27 13:00:00 crc kubenswrapper[4745]: I0127 13:00:00.152381 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 13:00:00 crc kubenswrapper[4745]: I0127 13:00:00.152711 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 13:00:00 crc kubenswrapper[4745]: I0127 13:00:00.160611 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491980-tk2bd"] Jan 27 13:00:00 crc kubenswrapper[4745]: I0127 13:00:00.270692 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e6c07dd-378e-491d-a6fe-5dfdf2a661db-config-volume\") pod \"collect-profiles-29491980-tk2bd\" (UID: \"9e6c07dd-378e-491d-a6fe-5dfdf2a661db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-tk2bd" Jan 27 13:00:00 crc kubenswrapper[4745]: I0127 13:00:00.270913 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e6c07dd-378e-491d-a6fe-5dfdf2a661db-secret-volume\") pod \"collect-profiles-29491980-tk2bd\" (UID: \"9e6c07dd-378e-491d-a6fe-5dfdf2a661db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-tk2bd" Jan 27 13:00:00 crc kubenswrapper[4745]: I0127 13:00:00.270973 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j2j9\" (UniqueName: \"kubernetes.io/projected/9e6c07dd-378e-491d-a6fe-5dfdf2a661db-kube-api-access-4j2j9\") pod \"collect-profiles-29491980-tk2bd\" (UID: \"9e6c07dd-378e-491d-a6fe-5dfdf2a661db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-tk2bd" Jan 27 13:00:00 crc kubenswrapper[4745]: I0127 13:00:00.372085 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e6c07dd-378e-491d-a6fe-5dfdf2a661db-secret-volume\") pod \"collect-profiles-29491980-tk2bd\" (UID: \"9e6c07dd-378e-491d-a6fe-5dfdf2a661db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-tk2bd" Jan 27 13:00:00 crc kubenswrapper[4745]: I0127 13:00:00.372167 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j2j9\" (UniqueName: \"kubernetes.io/projected/9e6c07dd-378e-491d-a6fe-5dfdf2a661db-kube-api-access-4j2j9\") pod \"collect-profiles-29491980-tk2bd\" (UID: \"9e6c07dd-378e-491d-a6fe-5dfdf2a661db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-tk2bd" Jan 27 13:00:00 crc kubenswrapper[4745]: I0127 13:00:00.372233 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e6c07dd-378e-491d-a6fe-5dfdf2a661db-config-volume\") pod \"collect-profiles-29491980-tk2bd\" (UID: \"9e6c07dd-378e-491d-a6fe-5dfdf2a661db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-tk2bd" Jan 27 13:00:00 crc kubenswrapper[4745]: I0127 13:00:00.373238 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e6c07dd-378e-491d-a6fe-5dfdf2a661db-config-volume\") pod 
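Note: the three-step sequence traced above (VerifyControllerAttachedVolume, then "MountVolume started", then "MountVolume.SetUp succeeded") is the kubelet volume manager reconciling a desired state ("this pod should have these volumes mounted") against an actual state; the same loop runs in reverse at pod teardown ("UnmountVolume started", "UnmountVolume.TearDown succeeded", "Volume detached"). A toy Go sketch of that desired-vs-actual loop; the names and maps are illustrative, not the real reconciler in kubelet's volumemanager package:

    package main

    import "fmt"

    // reconcile mounts volumes that are desired but not yet actual, and
    // unmounts volumes that are actual but no longer desired.
    func reconcile(desired, actual map[string]bool) {
        for vol := range desired {
            if !actual[vol] {
                fmt.Printf("MountVolume started for volume %q\n", vol)
                actual[vol] = true // MountVolume.SetUp succeeded
            }
        }
        for vol := range actual {
            if !desired[vol] {
                fmt.Printf("UnmountVolume started for volume %q\n", vol)
                delete(actual, vol) // TearDown succeeded, volume detached
            }
        }
    }

    func main() {
        desired := map[string]bool{"config-volume": true, "secret-volume": true}
        actual := map[string]bool{}
        reconcile(desired, actual) // pod admitted: both volumes mounted
        delete(desired, "config-volume")
        delete(desired, "secret-volume")
        reconcile(desired, actual) // pod deleted: both volumes unmounted
    }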
\"collect-profiles-29491980-tk2bd\" (UID: \"9e6c07dd-378e-491d-a6fe-5dfdf2a661db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-tk2bd" Jan 27 13:00:00 crc kubenswrapper[4745]: I0127 13:00:00.379695 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e6c07dd-378e-491d-a6fe-5dfdf2a661db-secret-volume\") pod \"collect-profiles-29491980-tk2bd\" (UID: \"9e6c07dd-378e-491d-a6fe-5dfdf2a661db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-tk2bd" Jan 27 13:00:00 crc kubenswrapper[4745]: I0127 13:00:00.390548 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j2j9\" (UniqueName: \"kubernetes.io/projected/9e6c07dd-378e-491d-a6fe-5dfdf2a661db-kube-api-access-4j2j9\") pod \"collect-profiles-29491980-tk2bd\" (UID: \"9e6c07dd-378e-491d-a6fe-5dfdf2a661db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-tk2bd" Jan 27 13:00:00 crc kubenswrapper[4745]: I0127 13:00:00.472534 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-tk2bd" Jan 27 13:00:00 crc kubenswrapper[4745]: I0127 13:00:00.965141 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491980-tk2bd"] Jan 27 13:00:01 crc kubenswrapper[4745]: I0127 13:00:01.338678 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-tk2bd" event={"ID":"9e6c07dd-378e-491d-a6fe-5dfdf2a661db","Type":"ContainerStarted","Data":"0da84f05bd04cd32c546a51a7bbc80509e9b78896db97a3b1f637f35b4411ead"} Jan 27 13:00:01 crc kubenswrapper[4745]: I0127 13:00:01.338719 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-tk2bd" event={"ID":"9e6c07dd-378e-491d-a6fe-5dfdf2a661db","Type":"ContainerStarted","Data":"a3cb6dd8f8e678735a2227cac6a8eb3284b8291723e4a65ec3de2d1234a87353"} Jan 27 13:00:01 crc kubenswrapper[4745]: I0127 13:00:01.363708 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-tk2bd" podStartSLOduration=1.363687736 podStartE2EDuration="1.363687736s" podCreationTimestamp="2026-01-27 13:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:00:01.357552684 +0000 UTC m=+2894.162463372" watchObservedRunningTime="2026-01-27 13:00:01.363687736 +0000 UTC m=+2894.168598424" Jan 27 13:00:02 crc kubenswrapper[4745]: I0127 13:00:02.350098 4745 generic.go:334] "Generic (PLEG): container finished" podID="9e6c07dd-378e-491d-a6fe-5dfdf2a661db" containerID="0da84f05bd04cd32c546a51a7bbc80509e9b78896db97a3b1f637f35b4411ead" exitCode=0 Jan 27 13:00:02 crc kubenswrapper[4745]: I0127 13:00:02.350207 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-tk2bd" event={"ID":"9e6c07dd-378e-491d-a6fe-5dfdf2a661db","Type":"ContainerDied","Data":"0da84f05bd04cd32c546a51a7bbc80509e9b78896db97a3b1f637f35b4411ead"} Jan 27 13:00:03 crc kubenswrapper[4745]: I0127 13:00:03.619181 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-tk2bd" Jan 27 13:00:03 crc kubenswrapper[4745]: I0127 13:00:03.726590 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e6c07dd-378e-491d-a6fe-5dfdf2a661db-secret-volume\") pod \"9e6c07dd-378e-491d-a6fe-5dfdf2a661db\" (UID: \"9e6c07dd-378e-491d-a6fe-5dfdf2a661db\") " Jan 27 13:00:03 crc kubenswrapper[4745]: I0127 13:00:03.726672 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j2j9\" (UniqueName: \"kubernetes.io/projected/9e6c07dd-378e-491d-a6fe-5dfdf2a661db-kube-api-access-4j2j9\") pod \"9e6c07dd-378e-491d-a6fe-5dfdf2a661db\" (UID: \"9e6c07dd-378e-491d-a6fe-5dfdf2a661db\") " Jan 27 13:00:03 crc kubenswrapper[4745]: I0127 13:00:03.726725 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e6c07dd-378e-491d-a6fe-5dfdf2a661db-config-volume\") pod \"9e6c07dd-378e-491d-a6fe-5dfdf2a661db\" (UID: \"9e6c07dd-378e-491d-a6fe-5dfdf2a661db\") " Jan 27 13:00:03 crc kubenswrapper[4745]: I0127 13:00:03.727388 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e6c07dd-378e-491d-a6fe-5dfdf2a661db-config-volume" (OuterVolumeSpecName: "config-volume") pod "9e6c07dd-378e-491d-a6fe-5dfdf2a661db" (UID: "9e6c07dd-378e-491d-a6fe-5dfdf2a661db"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:00:03 crc kubenswrapper[4745]: I0127 13:00:03.733391 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e6c07dd-378e-491d-a6fe-5dfdf2a661db-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9e6c07dd-378e-491d-a6fe-5dfdf2a661db" (UID: "9e6c07dd-378e-491d-a6fe-5dfdf2a661db"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:00:03 crc kubenswrapper[4745]: I0127 13:00:03.734346 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e6c07dd-378e-491d-a6fe-5dfdf2a661db-kube-api-access-4j2j9" (OuterVolumeSpecName: "kube-api-access-4j2j9") pod "9e6c07dd-378e-491d-a6fe-5dfdf2a661db" (UID: "9e6c07dd-378e-491d-a6fe-5dfdf2a661db"). InnerVolumeSpecName "kube-api-access-4j2j9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:00:03 crc kubenswrapper[4745]: I0127 13:00:03.827985 4745 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e6c07dd-378e-491d-a6fe-5dfdf2a661db-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 13:00:03 crc kubenswrapper[4745]: I0127 13:00:03.828013 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j2j9\" (UniqueName: \"kubernetes.io/projected/9e6c07dd-378e-491d-a6fe-5dfdf2a661db-kube-api-access-4j2j9\") on node \"crc\" DevicePath \"\"" Jan 27 13:00:03 crc kubenswrapper[4745]: I0127 13:00:03.828023 4745 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e6c07dd-378e-491d-a6fe-5dfdf2a661db-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 13:00:04 crc kubenswrapper[4745]: I0127 13:00:04.365045 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-tk2bd" event={"ID":"9e6c07dd-378e-491d-a6fe-5dfdf2a661db","Type":"ContainerDied","Data":"a3cb6dd8f8e678735a2227cac6a8eb3284b8291723e4a65ec3de2d1234a87353"} Jan 27 13:00:04 crc kubenswrapper[4745]: I0127 13:00:04.365424 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3cb6dd8f8e678735a2227cac6a8eb3284b8291723e4a65ec3de2d1234a87353" Jan 27 13:00:04 crc kubenswrapper[4745]: I0127 13:00:04.365190 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-tk2bd" Jan 27 13:00:04 crc kubenswrapper[4745]: I0127 13:00:04.432113 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491935-z6lc8"] Jan 27 13:00:04 crc kubenswrapper[4745]: I0127 13:00:04.440235 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491935-z6lc8"] Jan 27 13:00:05 crc kubenswrapper[4745]: I0127 13:00:05.966916 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:00:05 crc kubenswrapper[4745]: I0127 13:00:05.967204 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:00:06 crc kubenswrapper[4745]: I0127 13:00:06.081581 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe4ab457-bf86-43e0-898e-d7d1b5965142" path="/var/lib/kubelet/pods/fe4ab457-bf86-43e0-898e-d7d1b5965142/volumes" Jan 27 13:00:35 crc kubenswrapper[4745]: I0127 13:00:35.967498 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:00:35 crc kubenswrapper[4745]: I0127 13:00:35.968059 4745 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:00:48 crc kubenswrapper[4745]: I0127 13:00:48.989434 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-44mmr"] Jan 27 13:00:48 crc kubenswrapper[4745]: E0127 13:00:48.990316 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e6c07dd-378e-491d-a6fe-5dfdf2a661db" containerName="collect-profiles" Jan 27 13:00:48 crc kubenswrapper[4745]: I0127 13:00:48.990330 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e6c07dd-378e-491d-a6fe-5dfdf2a661db" containerName="collect-profiles" Jan 27 13:00:48 crc kubenswrapper[4745]: I0127 13:00:48.990537 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e6c07dd-378e-491d-a6fe-5dfdf2a661db" containerName="collect-profiles" Jan 27 13:00:48 crc kubenswrapper[4745]: I0127 13:00:48.991718 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44mmr" Jan 27 13:00:49 crc kubenswrapper[4745]: I0127 13:00:49.008716 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-44mmr"] Jan 27 13:00:49 crc kubenswrapper[4745]: I0127 13:00:49.118219 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db65d64c-c028-4048-9d34-623647d32012-catalog-content\") pod \"redhat-marketplace-44mmr\" (UID: \"db65d64c-c028-4048-9d34-623647d32012\") " pod="openshift-marketplace/redhat-marketplace-44mmr" Jan 27 13:00:49 crc kubenswrapper[4745]: I0127 13:00:49.118590 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnzkc\" (UniqueName: \"kubernetes.io/projected/db65d64c-c028-4048-9d34-623647d32012-kube-api-access-dnzkc\") pod \"redhat-marketplace-44mmr\" (UID: \"db65d64c-c028-4048-9d34-623647d32012\") " pod="openshift-marketplace/redhat-marketplace-44mmr" Jan 27 13:00:49 crc kubenswrapper[4745]: I0127 13:00:49.118691 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db65d64c-c028-4048-9d34-623647d32012-utilities\") pod \"redhat-marketplace-44mmr\" (UID: \"db65d64c-c028-4048-9d34-623647d32012\") " pod="openshift-marketplace/redhat-marketplace-44mmr" Jan 27 13:00:49 crc kubenswrapper[4745]: I0127 13:00:49.219395 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db65d64c-c028-4048-9d34-623647d32012-utilities\") pod \"redhat-marketplace-44mmr\" (UID: \"db65d64c-c028-4048-9d34-623647d32012\") " pod="openshift-marketplace/redhat-marketplace-44mmr" Jan 27 13:00:49 crc kubenswrapper[4745]: I0127 13:00:49.219467 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db65d64c-c028-4048-9d34-623647d32012-catalog-content\") pod \"redhat-marketplace-44mmr\" (UID: \"db65d64c-c028-4048-9d34-623647d32012\") " pod="openshift-marketplace/redhat-marketplace-44mmr" Jan 27 13:00:49 crc kubenswrapper[4745]: I0127 13:00:49.219523 4745 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-dnzkc\" (UniqueName: \"kubernetes.io/projected/db65d64c-c028-4048-9d34-623647d32012-kube-api-access-dnzkc\") pod \"redhat-marketplace-44mmr\" (UID: \"db65d64c-c028-4048-9d34-623647d32012\") " pod="openshift-marketplace/redhat-marketplace-44mmr" Jan 27 13:00:49 crc kubenswrapper[4745]: I0127 13:00:49.221232 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db65d64c-c028-4048-9d34-623647d32012-catalog-content\") pod \"redhat-marketplace-44mmr\" (UID: \"db65d64c-c028-4048-9d34-623647d32012\") " pod="openshift-marketplace/redhat-marketplace-44mmr" Jan 27 13:00:49 crc kubenswrapper[4745]: I0127 13:00:49.221280 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db65d64c-c028-4048-9d34-623647d32012-utilities\") pod \"redhat-marketplace-44mmr\" (UID: \"db65d64c-c028-4048-9d34-623647d32012\") " pod="openshift-marketplace/redhat-marketplace-44mmr" Jan 27 13:00:49 crc kubenswrapper[4745]: I0127 13:00:49.240442 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnzkc\" (UniqueName: \"kubernetes.io/projected/db65d64c-c028-4048-9d34-623647d32012-kube-api-access-dnzkc\") pod \"redhat-marketplace-44mmr\" (UID: \"db65d64c-c028-4048-9d34-623647d32012\") " pod="openshift-marketplace/redhat-marketplace-44mmr" Jan 27 13:00:49 crc kubenswrapper[4745]: I0127 13:00:49.314478 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44mmr" Jan 27 13:00:49 crc kubenswrapper[4745]: I0127 13:00:49.785046 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-44mmr"] Jan 27 13:00:50 crc kubenswrapper[4745]: I0127 13:00:50.730572 4745 generic.go:334] "Generic (PLEG): container finished" podID="db65d64c-c028-4048-9d34-623647d32012" containerID="ddef14db161708483b0996360019b5908cc5c02ba6bf2adf832d4a229b04976a" exitCode=0 Jan 27 13:00:50 crc kubenswrapper[4745]: I0127 13:00:50.730660 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44mmr" event={"ID":"db65d64c-c028-4048-9d34-623647d32012","Type":"ContainerDied","Data":"ddef14db161708483b0996360019b5908cc5c02ba6bf2adf832d4a229b04976a"} Jan 27 13:00:50 crc kubenswrapper[4745]: I0127 13:00:50.730714 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44mmr" event={"ID":"db65d64c-c028-4048-9d34-623647d32012","Type":"ContainerStarted","Data":"a5502f0e2fb8989d2c10872b905ae0d09ec752210eaf7f0deef27d96745160f4"} Jan 27 13:00:50 crc kubenswrapper[4745]: I0127 13:00:50.733980 4745 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 13:00:51 crc kubenswrapper[4745]: I0127 13:00:51.740232 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44mmr" event={"ID":"db65d64c-c028-4048-9d34-623647d32012","Type":"ContainerStarted","Data":"5a81e2fab585c503356b1b5d5c0260b5dd7b9d565cd5b1614b24029b8cb6018e"} Jan 27 13:00:52 crc kubenswrapper[4745]: I0127 13:00:52.753956 4745 generic.go:334] "Generic (PLEG): container finished" podID="db65d64c-c028-4048-9d34-623647d32012" containerID="5a81e2fab585c503356b1b5d5c0260b5dd7b9d565cd5b1614b24029b8cb6018e" exitCode=0 Jan 27 13:00:52 crc kubenswrapper[4745]: I0127 13:00:52.754009 4745 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-44mmr" event={"ID":"db65d64c-c028-4048-9d34-623647d32012","Type":"ContainerDied","Data":"5a81e2fab585c503356b1b5d5c0260b5dd7b9d565cd5b1614b24029b8cb6018e"} Jan 27 13:00:53 crc kubenswrapper[4745]: I0127 13:00:53.769913 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44mmr" event={"ID":"db65d64c-c028-4048-9d34-623647d32012","Type":"ContainerStarted","Data":"f59978b4d4005ce407ef1cd6a1e15be809c05ac534f293e5ea599f5bfd7dd4c5"} Jan 27 13:00:53 crc kubenswrapper[4745]: I0127 13:00:53.798566 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-44mmr" podStartSLOduration=3.283454569 podStartE2EDuration="5.79854628s" podCreationTimestamp="2026-01-27 13:00:48 +0000 UTC" firstStartedPulling="2026-01-27 13:00:50.733439989 +0000 UTC m=+2943.538350717" lastFinishedPulling="2026-01-27 13:00:53.24853173 +0000 UTC m=+2946.053442428" observedRunningTime="2026-01-27 13:00:53.790500924 +0000 UTC m=+2946.595411632" watchObservedRunningTime="2026-01-27 13:00:53.79854628 +0000 UTC m=+2946.603456968" Jan 27 13:00:56 crc kubenswrapper[4745]: I0127 13:00:56.652882 4745 scope.go:117] "RemoveContainer" containerID="98de81cd3cf99a456d6f032e8b1e10e974d3d59178e978e3724c124977df0e76" Jan 27 13:00:57 crc kubenswrapper[4745]: I0127 13:00:57.881432 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m2rz8"] Jan 27 13:00:57 crc kubenswrapper[4745]: I0127 13:00:57.883391 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m2rz8" Jan 27 13:00:57 crc kubenswrapper[4745]: I0127 13:00:57.894382 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m2rz8"] Jan 27 13:00:57 crc kubenswrapper[4745]: I0127 13:00:57.952502 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3167571-2811-45ab-9830-58f9f5ba890f-utilities\") pod \"community-operators-m2rz8\" (UID: \"d3167571-2811-45ab-9830-58f9f5ba890f\") " pod="openshift-marketplace/community-operators-m2rz8" Jan 27 13:00:57 crc kubenswrapper[4745]: I0127 13:00:57.952935 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98472\" (UniqueName: \"kubernetes.io/projected/d3167571-2811-45ab-9830-58f9f5ba890f-kube-api-access-98472\") pod \"community-operators-m2rz8\" (UID: \"d3167571-2811-45ab-9830-58f9f5ba890f\") " pod="openshift-marketplace/community-operators-m2rz8" Jan 27 13:00:57 crc kubenswrapper[4745]: I0127 13:00:57.953073 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3167571-2811-45ab-9830-58f9f5ba890f-catalog-content\") pod \"community-operators-m2rz8\" (UID: \"d3167571-2811-45ab-9830-58f9f5ba890f\") " pod="openshift-marketplace/community-operators-m2rz8" Jan 27 13:00:58 crc kubenswrapper[4745]: I0127 13:00:58.054744 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98472\" (UniqueName: \"kubernetes.io/projected/d3167571-2811-45ab-9830-58f9f5ba890f-kube-api-access-98472\") pod \"community-operators-m2rz8\" (UID: \"d3167571-2811-45ab-9830-58f9f5ba890f\") " pod="openshift-marketplace/community-operators-m2rz8" Jan 27 
Jan 27 13:00:56 crc kubenswrapper[4745]: I0127 13:00:56.652882 4745 scope.go:117] "RemoveContainer" containerID="98de81cd3cf99a456d6f032e8b1e10e974d3d59178e978e3724c124977df0e76"
Jan 27 13:00:57 crc kubenswrapper[4745]: I0127 13:00:57.881432 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m2rz8"]
Jan 27 13:00:57 crc kubenswrapper[4745]: I0127 13:00:57.883391 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m2rz8"
Jan 27 13:00:57 crc kubenswrapper[4745]: I0127 13:00:57.894382 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m2rz8"]
Jan 27 13:00:57 crc kubenswrapper[4745]: I0127 13:00:57.952502 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3167571-2811-45ab-9830-58f9f5ba890f-utilities\") pod \"community-operators-m2rz8\" (UID: \"d3167571-2811-45ab-9830-58f9f5ba890f\") " pod="openshift-marketplace/community-operators-m2rz8"
Jan 27 13:00:57 crc kubenswrapper[4745]: I0127 13:00:57.952935 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98472\" (UniqueName: \"kubernetes.io/projected/d3167571-2811-45ab-9830-58f9f5ba890f-kube-api-access-98472\") pod \"community-operators-m2rz8\" (UID: \"d3167571-2811-45ab-9830-58f9f5ba890f\") " pod="openshift-marketplace/community-operators-m2rz8"
Jan 27 13:00:57 crc kubenswrapper[4745]: I0127 13:00:57.953073 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3167571-2811-45ab-9830-58f9f5ba890f-catalog-content\") pod \"community-operators-m2rz8\" (UID: \"d3167571-2811-45ab-9830-58f9f5ba890f\") " pod="openshift-marketplace/community-operators-m2rz8"
Jan 27 13:00:58 crc kubenswrapper[4745]: I0127 13:00:58.054744 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98472\" (UniqueName: \"kubernetes.io/projected/d3167571-2811-45ab-9830-58f9f5ba890f-kube-api-access-98472\") pod \"community-operators-m2rz8\" (UID: \"d3167571-2811-45ab-9830-58f9f5ba890f\") " pod="openshift-marketplace/community-operators-m2rz8"
Jan 27 13:00:58 crc kubenswrapper[4745]: I0127 13:00:58.054800 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3167571-2811-45ab-9830-58f9f5ba890f-catalog-content\") pod \"community-operators-m2rz8\" (UID: \"d3167571-2811-45ab-9830-58f9f5ba890f\") " pod="openshift-marketplace/community-operators-m2rz8"
Jan 27 13:00:58 crc kubenswrapper[4745]: I0127 13:00:58.054882 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3167571-2811-45ab-9830-58f9f5ba890f-utilities\") pod \"community-operators-m2rz8\" (UID: \"d3167571-2811-45ab-9830-58f9f5ba890f\") " pod="openshift-marketplace/community-operators-m2rz8"
Jan 27 13:00:58 crc kubenswrapper[4745]: I0127 13:00:58.055436 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3167571-2811-45ab-9830-58f9f5ba890f-utilities\") pod \"community-operators-m2rz8\" (UID: \"d3167571-2811-45ab-9830-58f9f5ba890f\") " pod="openshift-marketplace/community-operators-m2rz8"
Jan 27 13:00:58 crc kubenswrapper[4745]: I0127 13:00:58.055577 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3167571-2811-45ab-9830-58f9f5ba890f-catalog-content\") pod \"community-operators-m2rz8\" (UID: \"d3167571-2811-45ab-9830-58f9f5ba890f\") " pod="openshift-marketplace/community-operators-m2rz8"
Jan 27 13:00:58 crc kubenswrapper[4745]: I0127 13:00:58.084114 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98472\" (UniqueName: \"kubernetes.io/projected/d3167571-2811-45ab-9830-58f9f5ba890f-kube-api-access-98472\") pod \"community-operators-m2rz8\" (UID: \"d3167571-2811-45ab-9830-58f9f5ba890f\") " pod="openshift-marketplace/community-operators-m2rz8"
Jan 27 13:00:58 crc kubenswrapper[4745]: I0127 13:00:58.203708 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m2rz8"
Jan 27 13:00:58 crc kubenswrapper[4745]: I0127 13:00:58.709948 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m2rz8"]
Jan 27 13:00:58 crc kubenswrapper[4745]: I0127 13:00:58.805542 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m2rz8" event={"ID":"d3167571-2811-45ab-9830-58f9f5ba890f","Type":"ContainerStarted","Data":"003a3609c78db08509f96ff16161b3be3173dfb4d8e67163476c6994a8ecece0"}
Jan 27 13:00:59 crc kubenswrapper[4745]: I0127 13:00:59.315026 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-44mmr"
Jan 27 13:00:59 crc kubenswrapper[4745]: I0127 13:00:59.315973 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-44mmr"
Jan 27 13:00:59 crc kubenswrapper[4745]: I0127 13:00:59.367301 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-44mmr"
Jan 27 13:00:59 crc kubenswrapper[4745]: I0127 13:00:59.814545 4745 generic.go:334] "Generic (PLEG): container finished" podID="d3167571-2811-45ab-9830-58f9f5ba890f" containerID="55c06e0c9fe1cb3e41cbaaf688692b1d99ed8e65f23aa9f0c8994930fe54a0a7" exitCode=0
Jan 27 13:00:59 crc kubenswrapper[4745]: I0127 13:00:59.814668 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m2rz8" event={"ID":"d3167571-2811-45ab-9830-58f9f5ba890f","Type":"ContainerDied","Data":"55c06e0c9fe1cb3e41cbaaf688692b1d99ed8e65f23aa9f0c8994930fe54a0a7"}
Jan 27 13:00:59 crc kubenswrapper[4745]: I0127 13:00:59.860361 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-44mmr"
Jan 27 13:01:00 crc kubenswrapper[4745]: I0127 13:01:00.826253 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m2rz8" event={"ID":"d3167571-2811-45ab-9830-58f9f5ba890f","Type":"ContainerStarted","Data":"805c9d6c75f21854814c1b9b9c1619ef4cce73b09ff27a5bfb0f4eb755dafdba"}
Jan 27 13:01:01 crc kubenswrapper[4745]: I0127 13:01:01.662776 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-44mmr"]
Jan 27 13:01:01 crc kubenswrapper[4745]: I0127 13:01:01.837046 4745 generic.go:334] "Generic (PLEG): container finished" podID="d3167571-2811-45ab-9830-58f9f5ba890f" containerID="805c9d6c75f21854814c1b9b9c1619ef4cce73b09ff27a5bfb0f4eb755dafdba" exitCode=0
Jan 27 13:01:01 crc kubenswrapper[4745]: I0127 13:01:01.837129 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m2rz8" event={"ID":"d3167571-2811-45ab-9830-58f9f5ba890f","Type":"ContainerDied","Data":"805c9d6c75f21854814c1b9b9c1619ef4cce73b09ff27a5bfb0f4eb755dafdba"}
Jan 27 13:01:01 crc kubenswrapper[4745]: I0127 13:01:01.837237 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-44mmr" podUID="db65d64c-c028-4048-9d34-623647d32012" containerName="registry-server" containerID="cri-o://f59978b4d4005ce407ef1cd6a1e15be809c05ac534f293e5ea599f5bfd7dd4c5" gracePeriod=2
Jan 27 13:01:02 crc kubenswrapper[4745]: I0127 13:01:02.259910 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44mmr"
Jan 27 13:01:02 crc kubenswrapper[4745]: I0127 13:01:02.312145 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnzkc\" (UniqueName: \"kubernetes.io/projected/db65d64c-c028-4048-9d34-623647d32012-kube-api-access-dnzkc\") pod \"db65d64c-c028-4048-9d34-623647d32012\" (UID: \"db65d64c-c028-4048-9d34-623647d32012\") "
Jan 27 13:01:02 crc kubenswrapper[4745]: I0127 13:01:02.312250 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db65d64c-c028-4048-9d34-623647d32012-utilities\") pod \"db65d64c-c028-4048-9d34-623647d32012\" (UID: \"db65d64c-c028-4048-9d34-623647d32012\") "
Jan 27 13:01:02 crc kubenswrapper[4745]: I0127 13:01:02.312311 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db65d64c-c028-4048-9d34-623647d32012-catalog-content\") pod \"db65d64c-c028-4048-9d34-623647d32012\" (UID: \"db65d64c-c028-4048-9d34-623647d32012\") "
Jan 27 13:01:02 crc kubenswrapper[4745]: I0127 13:01:02.313355 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db65d64c-c028-4048-9d34-623647d32012-utilities" (OuterVolumeSpecName: "utilities") pod "db65d64c-c028-4048-9d34-623647d32012" (UID: "db65d64c-c028-4048-9d34-623647d32012"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:01:02 crc kubenswrapper[4745]: I0127 13:01:02.319794 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db65d64c-c028-4048-9d34-623647d32012-kube-api-access-dnzkc" (OuterVolumeSpecName: "kube-api-access-dnzkc") pod "db65d64c-c028-4048-9d34-623647d32012" (UID: "db65d64c-c028-4048-9d34-623647d32012"). InnerVolumeSpecName "kube-api-access-dnzkc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:01:02 crc kubenswrapper[4745]: I0127 13:01:02.367367 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db65d64c-c028-4048-9d34-623647d32012-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db65d64c-c028-4048-9d34-623647d32012" (UID: "db65d64c-c028-4048-9d34-623647d32012"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:01:02 crc kubenswrapper[4745]: I0127 13:01:02.414473 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db65d64c-c028-4048-9d34-623647d32012-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 13:01:02 crc kubenswrapper[4745]: I0127 13:01:02.414501 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db65d64c-c028-4048-9d34-623647d32012-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 13:01:02 crc kubenswrapper[4745]: I0127 13:01:02.414514 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnzkc\" (UniqueName: \"kubernetes.io/projected/db65d64c-c028-4048-9d34-623647d32012-kube-api-access-dnzkc\") on node \"crc\" DevicePath \"\""
Jan 27 13:01:02 crc kubenswrapper[4745]: I0127 13:01:02.849866 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m2rz8" event={"ID":"d3167571-2811-45ab-9830-58f9f5ba890f","Type":"ContainerStarted","Data":"c0c747c9b28eb1623573e7b31fa562c9d183bf2eebaa7860eebe85f2243e49b5"}
Jan 27 13:01:02 crc kubenswrapper[4745]: I0127 13:01:02.853162 4745 generic.go:334] "Generic (PLEG): container finished" podID="db65d64c-c028-4048-9d34-623647d32012" containerID="f59978b4d4005ce407ef1cd6a1e15be809c05ac534f293e5ea599f5bfd7dd4c5" exitCode=0
Jan 27 13:01:02 crc kubenswrapper[4745]: I0127 13:01:02.853347 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44mmr" event={"ID":"db65d64c-c028-4048-9d34-623647d32012","Type":"ContainerDied","Data":"f59978b4d4005ce407ef1cd6a1e15be809c05ac534f293e5ea599f5bfd7dd4c5"}
Jan 27 13:01:02 crc kubenswrapper[4745]: I0127 13:01:02.853511 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44mmr" event={"ID":"db65d64c-c028-4048-9d34-623647d32012","Type":"ContainerDied","Data":"a5502f0e2fb8989d2c10872b905ae0d09ec752210eaf7f0deef27d96745160f4"}
Jan 27 13:01:02 crc kubenswrapper[4745]: I0127 13:01:02.853586 4745 scope.go:117] "RemoveContainer" containerID="f59978b4d4005ce407ef1cd6a1e15be809c05ac534f293e5ea599f5bfd7dd4c5"
Jan 27 13:01:02 crc kubenswrapper[4745]: I0127 13:01:02.853842 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44mmr"
Jan 27 13:01:02 crc kubenswrapper[4745]: I0127 13:01:02.880854 4745 scope.go:117] "RemoveContainer" containerID="5a81e2fab585c503356b1b5d5c0260b5dd7b9d565cd5b1614b24029b8cb6018e"
Jan 27 13:01:02 crc kubenswrapper[4745]: I0127 13:01:02.904464 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m2rz8" podStartSLOduration=3.348544814 podStartE2EDuration="5.904438091s" podCreationTimestamp="2026-01-27 13:00:57 +0000 UTC" firstStartedPulling="2026-01-27 13:00:59.818051863 +0000 UTC m=+2952.622962551" lastFinishedPulling="2026-01-27 13:01:02.37394513 +0000 UTC m=+2955.178855828" observedRunningTime="2026-01-27 13:01:02.878312828 +0000 UTC m=+2955.683223546" watchObservedRunningTime="2026-01-27 13:01:02.904438091 +0000 UTC m=+2955.709348779"
Jan 27 13:01:02 crc kubenswrapper[4745]: I0127 13:01:02.909120 4745 scope.go:117] "RemoveContainer" containerID="ddef14db161708483b0996360019b5908cc5c02ba6bf2adf832d4a229b04976a"
Jan 27 13:01:02 crc kubenswrapper[4745]: I0127 13:01:02.909136 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-44mmr"]
Jan 27 13:01:02 crc kubenswrapper[4745]: I0127 13:01:02.926830 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-44mmr"]
Jan 27 13:01:02 crc kubenswrapper[4745]: I0127 13:01:02.929878 4745 scope.go:117] "RemoveContainer" containerID="f59978b4d4005ce407ef1cd6a1e15be809c05ac534f293e5ea599f5bfd7dd4c5"
Jan 27 13:01:02 crc kubenswrapper[4745]: E0127 13:01:02.930284 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f59978b4d4005ce407ef1cd6a1e15be809c05ac534f293e5ea599f5bfd7dd4c5\": container with ID starting with f59978b4d4005ce407ef1cd6a1e15be809c05ac534f293e5ea599f5bfd7dd4c5 not found: ID does not exist" containerID="f59978b4d4005ce407ef1cd6a1e15be809c05ac534f293e5ea599f5bfd7dd4c5"
Jan 27 13:01:02 crc kubenswrapper[4745]: I0127 13:01:02.930326 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f59978b4d4005ce407ef1cd6a1e15be809c05ac534f293e5ea599f5bfd7dd4c5"} err="failed to get container status \"f59978b4d4005ce407ef1cd6a1e15be809c05ac534f293e5ea599f5bfd7dd4c5\": rpc error: code = NotFound desc = could not find container \"f59978b4d4005ce407ef1cd6a1e15be809c05ac534f293e5ea599f5bfd7dd4c5\": container with ID starting with f59978b4d4005ce407ef1cd6a1e15be809c05ac534f293e5ea599f5bfd7dd4c5 not found: ID does not exist"
Jan 27 13:01:02 crc kubenswrapper[4745]: I0127 13:01:02.930354 4745 scope.go:117] "RemoveContainer" containerID="5a81e2fab585c503356b1b5d5c0260b5dd7b9d565cd5b1614b24029b8cb6018e"
Jan 27 13:01:02 crc kubenswrapper[4745]: E0127 13:01:02.931007 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a81e2fab585c503356b1b5d5c0260b5dd7b9d565cd5b1614b24029b8cb6018e\": container with ID starting with 5a81e2fab585c503356b1b5d5c0260b5dd7b9d565cd5b1614b24029b8cb6018e not found: ID does not exist" containerID="5a81e2fab585c503356b1b5d5c0260b5dd7b9d565cd5b1614b24029b8cb6018e"
Jan 27 13:01:02 crc kubenswrapper[4745]: I0127 13:01:02.931063 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a81e2fab585c503356b1b5d5c0260b5dd7b9d565cd5b1614b24029b8cb6018e"} err="failed to get container status \"5a81e2fab585c503356b1b5d5c0260b5dd7b9d565cd5b1614b24029b8cb6018e\": rpc error: code = NotFound desc = could not find container \"5a81e2fab585c503356b1b5d5c0260b5dd7b9d565cd5b1614b24029b8cb6018e\": container with ID starting with 5a81e2fab585c503356b1b5d5c0260b5dd7b9d565cd5b1614b24029b8cb6018e not found: ID does not exist"
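Note: the "RemoveContainer" / NotFound pairs here are benign. The kubelet is garbage-collecting containers that CRI-O already removed together with the pod sandbox, so the status lookup before deletion fails with NotFound and the kubelet simply moves on: the delete is idempotent. A stdlib-only Go sketch of that pattern; removeContainer and the sentinel error are illustrative stand-ins (the real kubelet checks the gRPC NotFound status code from the CRI):

    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("not found")

    // removeContainer stands in for the CRI RemoveContainer call.
    func removeContainer(id string) error {
        return fmt.Errorf("could not find container %q: %w", id, errNotFound)
    }

    func main() {
        if err := removeContainer("f59978b4"); errors.Is(err, errNotFound) {
            // Already gone: log and treat as success instead of retrying.
            fmt.Println("already removed, treating as success")
        }
    }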
status \"5a81e2fab585c503356b1b5d5c0260b5dd7b9d565cd5b1614b24029b8cb6018e\": rpc error: code = NotFound desc = could not find container \"5a81e2fab585c503356b1b5d5c0260b5dd7b9d565cd5b1614b24029b8cb6018e\": container with ID starting with 5a81e2fab585c503356b1b5d5c0260b5dd7b9d565cd5b1614b24029b8cb6018e not found: ID does not exist" Jan 27 13:01:02 crc kubenswrapper[4745]: I0127 13:01:02.931097 4745 scope.go:117] "RemoveContainer" containerID="ddef14db161708483b0996360019b5908cc5c02ba6bf2adf832d4a229b04976a" Jan 27 13:01:02 crc kubenswrapper[4745]: E0127 13:01:02.931364 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddef14db161708483b0996360019b5908cc5c02ba6bf2adf832d4a229b04976a\": container with ID starting with ddef14db161708483b0996360019b5908cc5c02ba6bf2adf832d4a229b04976a not found: ID does not exist" containerID="ddef14db161708483b0996360019b5908cc5c02ba6bf2adf832d4a229b04976a" Jan 27 13:01:02 crc kubenswrapper[4745]: I0127 13:01:02.931399 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddef14db161708483b0996360019b5908cc5c02ba6bf2adf832d4a229b04976a"} err="failed to get container status \"ddef14db161708483b0996360019b5908cc5c02ba6bf2adf832d4a229b04976a\": rpc error: code = NotFound desc = could not find container \"ddef14db161708483b0996360019b5908cc5c02ba6bf2adf832d4a229b04976a\": container with ID starting with ddef14db161708483b0996360019b5908cc5c02ba6bf2adf832d4a229b04976a not found: ID does not exist" Jan 27 13:01:04 crc kubenswrapper[4745]: I0127 13:01:04.082466 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db65d64c-c028-4048-9d34-623647d32012" path="/var/lib/kubelet/pods/db65d64c-c028-4048-9d34-623647d32012/volumes" Jan 27 13:01:05 crc kubenswrapper[4745]: I0127 13:01:05.967510 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:01:05 crc kubenswrapper[4745]: I0127 13:01:05.967566 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:01:05 crc kubenswrapper[4745]: I0127 13:01:05.967605 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" Jan 27 13:01:05 crc kubenswrapper[4745]: I0127 13:01:05.968217 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"13887eb0088f3e5d43d51e708ce4f207dcbfd90318c899c53ec115321a09107c"} pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 13:01:05 crc kubenswrapper[4745]: I0127 13:01:05.968270 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" 
containerID="cri-o://13887eb0088f3e5d43d51e708ce4f207dcbfd90318c899c53ec115321a09107c" gracePeriod=600 Jan 27 13:01:06 crc kubenswrapper[4745]: I0127 13:01:06.889160 4745 generic.go:334] "Generic (PLEG): container finished" podID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerID="13887eb0088f3e5d43d51e708ce4f207dcbfd90318c899c53ec115321a09107c" exitCode=0 Jan 27 13:01:06 crc kubenswrapper[4745]: I0127 13:01:06.889255 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerDied","Data":"13887eb0088f3e5d43d51e708ce4f207dcbfd90318c899c53ec115321a09107c"} Jan 27 13:01:06 crc kubenswrapper[4745]: I0127 13:01:06.889432 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerStarted","Data":"45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048"} Jan 27 13:01:06 crc kubenswrapper[4745]: I0127 13:01:06.889466 4745 scope.go:117] "RemoveContainer" containerID="ed1f7c0b98a457ae179fa927b1ad56a67772a2ab2b7c5d05138f58f05c4e2bc0" Jan 27 13:01:08 crc kubenswrapper[4745]: I0127 13:01:08.204413 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m2rz8" Jan 27 13:01:08 crc kubenswrapper[4745]: I0127 13:01:08.204995 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m2rz8" Jan 27 13:01:08 crc kubenswrapper[4745]: I0127 13:01:08.255431 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m2rz8" Jan 27 13:01:08 crc kubenswrapper[4745]: I0127 13:01:08.945109 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m2rz8" Jan 27 13:01:08 crc kubenswrapper[4745]: I0127 13:01:08.997232 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m2rz8"] Jan 27 13:01:10 crc kubenswrapper[4745]: I0127 13:01:10.924310 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m2rz8" podUID="d3167571-2811-45ab-9830-58f9f5ba890f" containerName="registry-server" containerID="cri-o://c0c747c9b28eb1623573e7b31fa562c9d183bf2eebaa7860eebe85f2243e49b5" gracePeriod=2 Jan 27 13:01:12 crc kubenswrapper[4745]: I0127 13:01:12.522728 4745 util.go:48] "No ready sandbox for pod can be found. 
Jan 27 13:01:12 crc kubenswrapper[4745]: I0127 13:01:12.522728 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m2rz8"
Jan 27 13:01:12 crc kubenswrapper[4745]: I0127 13:01:12.554299 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3167571-2811-45ab-9830-58f9f5ba890f-catalog-content\") pod \"d3167571-2811-45ab-9830-58f9f5ba890f\" (UID: \"d3167571-2811-45ab-9830-58f9f5ba890f\") "
Jan 27 13:01:12 crc kubenswrapper[4745]: I0127 13:01:12.554356 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98472\" (UniqueName: \"kubernetes.io/projected/d3167571-2811-45ab-9830-58f9f5ba890f-kube-api-access-98472\") pod \"d3167571-2811-45ab-9830-58f9f5ba890f\" (UID: \"d3167571-2811-45ab-9830-58f9f5ba890f\") "
Jan 27 13:01:12 crc kubenswrapper[4745]: I0127 13:01:12.554450 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3167571-2811-45ab-9830-58f9f5ba890f-utilities\") pod \"d3167571-2811-45ab-9830-58f9f5ba890f\" (UID: \"d3167571-2811-45ab-9830-58f9f5ba890f\") "
Jan 27 13:01:12 crc kubenswrapper[4745]: I0127 13:01:12.555650 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3167571-2811-45ab-9830-58f9f5ba890f-utilities" (OuterVolumeSpecName: "utilities") pod "d3167571-2811-45ab-9830-58f9f5ba890f" (UID: "d3167571-2811-45ab-9830-58f9f5ba890f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:01:12 crc kubenswrapper[4745]: I0127 13:01:12.559557 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3167571-2811-45ab-9830-58f9f5ba890f-kube-api-access-98472" (OuterVolumeSpecName: "kube-api-access-98472") pod "d3167571-2811-45ab-9830-58f9f5ba890f" (UID: "d3167571-2811-45ab-9830-58f9f5ba890f"). InnerVolumeSpecName "kube-api-access-98472". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:01:12 crc kubenswrapper[4745]: I0127 13:01:12.608977 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3167571-2811-45ab-9830-58f9f5ba890f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3167571-2811-45ab-9830-58f9f5ba890f" (UID: "d3167571-2811-45ab-9830-58f9f5ba890f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:01:12 crc kubenswrapper[4745]: I0127 13:01:12.656701 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3167571-2811-45ab-9830-58f9f5ba890f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 13:01:12 crc kubenswrapper[4745]: I0127 13:01:12.656963 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98472\" (UniqueName: \"kubernetes.io/projected/d3167571-2811-45ab-9830-58f9f5ba890f-kube-api-access-98472\") on node \"crc\" DevicePath \"\""
Jan 27 13:01:12 crc kubenswrapper[4745]: I0127 13:01:12.657064 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3167571-2811-45ab-9830-58f9f5ba890f-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 13:01:12 crc kubenswrapper[4745]: I0127 13:01:12.941165 4745 generic.go:334] "Generic (PLEG): container finished" podID="d3167571-2811-45ab-9830-58f9f5ba890f" containerID="c0c747c9b28eb1623573e7b31fa562c9d183bf2eebaa7860eebe85f2243e49b5" exitCode=0
Jan 27 13:01:12 crc kubenswrapper[4745]: I0127 13:01:12.941253 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m2rz8" event={"ID":"d3167571-2811-45ab-9830-58f9f5ba890f","Type":"ContainerDied","Data":"c0c747c9b28eb1623573e7b31fa562c9d183bf2eebaa7860eebe85f2243e49b5"}
Jan 27 13:01:12 crc kubenswrapper[4745]: I0127 13:01:12.941293 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m2rz8" event={"ID":"d3167571-2811-45ab-9830-58f9f5ba890f","Type":"ContainerDied","Data":"003a3609c78db08509f96ff16161b3be3173dfb4d8e67163476c6994a8ecece0"}
Jan 27 13:01:12 crc kubenswrapper[4745]: I0127 13:01:12.941321 4745 scope.go:117] "RemoveContainer" containerID="c0c747c9b28eb1623573e7b31fa562c9d183bf2eebaa7860eebe85f2243e49b5"
Jan 27 13:01:12 crc kubenswrapper[4745]: I0127 13:01:12.941496 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m2rz8"
Jan 27 13:01:12 crc kubenswrapper[4745]: I0127 13:01:12.962702 4745 scope.go:117] "RemoveContainer" containerID="805c9d6c75f21854814c1b9b9c1619ef4cce73b09ff27a5bfb0f4eb755dafdba"
Jan 27 13:01:12 crc kubenswrapper[4745]: I0127 13:01:12.988256 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m2rz8"]
Jan 27 13:01:12 crc kubenswrapper[4745]: I0127 13:01:12.994190 4745 scope.go:117] "RemoveContainer" containerID="55c06e0c9fe1cb3e41cbaaf688692b1d99ed8e65f23aa9f0c8994930fe54a0a7"
Jan 27 13:01:12 crc kubenswrapper[4745]: I0127 13:01:12.997317 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m2rz8"]
Jan 27 13:01:13 crc kubenswrapper[4745]: I0127 13:01:13.017374 4745 scope.go:117] "RemoveContainer" containerID="c0c747c9b28eb1623573e7b31fa562c9d183bf2eebaa7860eebe85f2243e49b5"
Jan 27 13:01:13 crc kubenswrapper[4745]: E0127 13:01:13.017845 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0c747c9b28eb1623573e7b31fa562c9d183bf2eebaa7860eebe85f2243e49b5\": container with ID starting with c0c747c9b28eb1623573e7b31fa562c9d183bf2eebaa7860eebe85f2243e49b5 not found: ID does not exist" containerID="c0c747c9b28eb1623573e7b31fa562c9d183bf2eebaa7860eebe85f2243e49b5"
Jan 27 13:01:13 crc kubenswrapper[4745]: I0127 13:01:13.017882 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0c747c9b28eb1623573e7b31fa562c9d183bf2eebaa7860eebe85f2243e49b5"} err="failed to get container status \"c0c747c9b28eb1623573e7b31fa562c9d183bf2eebaa7860eebe85f2243e49b5\": rpc error: code = NotFound desc = could not find container \"c0c747c9b28eb1623573e7b31fa562c9d183bf2eebaa7860eebe85f2243e49b5\": container with ID starting with c0c747c9b28eb1623573e7b31fa562c9d183bf2eebaa7860eebe85f2243e49b5 not found: ID does not exist"
Jan 27 13:01:13 crc kubenswrapper[4745]: I0127 13:01:13.017911 4745 scope.go:117] "RemoveContainer" containerID="805c9d6c75f21854814c1b9b9c1619ef4cce73b09ff27a5bfb0f4eb755dafdba"
Jan 27 13:01:13 crc kubenswrapper[4745]: E0127 13:01:13.018387 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"805c9d6c75f21854814c1b9b9c1619ef4cce73b09ff27a5bfb0f4eb755dafdba\": container with ID starting with 805c9d6c75f21854814c1b9b9c1619ef4cce73b09ff27a5bfb0f4eb755dafdba not found: ID does not exist" containerID="805c9d6c75f21854814c1b9b9c1619ef4cce73b09ff27a5bfb0f4eb755dafdba"
Jan 27 13:01:13 crc kubenswrapper[4745]: I0127 13:01:13.018436 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"805c9d6c75f21854814c1b9b9c1619ef4cce73b09ff27a5bfb0f4eb755dafdba"} err="failed to get container status \"805c9d6c75f21854814c1b9b9c1619ef4cce73b09ff27a5bfb0f4eb755dafdba\": rpc error: code = NotFound desc = could not find container \"805c9d6c75f21854814c1b9b9c1619ef4cce73b09ff27a5bfb0f4eb755dafdba\": container with ID starting with 805c9d6c75f21854814c1b9b9c1619ef4cce73b09ff27a5bfb0f4eb755dafdba not found: ID does not exist"
Jan 27 13:01:13 crc kubenswrapper[4745]: I0127 13:01:13.018466 4745 scope.go:117] "RemoveContainer" containerID="55c06e0c9fe1cb3e41cbaaf688692b1d99ed8e65f23aa9f0c8994930fe54a0a7"
Jan 27 13:01:13 crc kubenswrapper[4745]: E0127 13:01:13.018858 4745 log.go:32] "ContainerStatus from runtime service
failed" err="rpc error: code = NotFound desc = could not find container \"55c06e0c9fe1cb3e41cbaaf688692b1d99ed8e65f23aa9f0c8994930fe54a0a7\": container with ID starting with 55c06e0c9fe1cb3e41cbaaf688692b1d99ed8e65f23aa9f0c8994930fe54a0a7 not found: ID does not exist" containerID="55c06e0c9fe1cb3e41cbaaf688692b1d99ed8e65f23aa9f0c8994930fe54a0a7" Jan 27 13:01:13 crc kubenswrapper[4745]: I0127 13:01:13.018889 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55c06e0c9fe1cb3e41cbaaf688692b1d99ed8e65f23aa9f0c8994930fe54a0a7"} err="failed to get container status \"55c06e0c9fe1cb3e41cbaaf688692b1d99ed8e65f23aa9f0c8994930fe54a0a7\": rpc error: code = NotFound desc = could not find container \"55c06e0c9fe1cb3e41cbaaf688692b1d99ed8e65f23aa9f0c8994930fe54a0a7\": container with ID starting with 55c06e0c9fe1cb3e41cbaaf688692b1d99ed8e65f23aa9f0c8994930fe54a0a7 not found: ID does not exist" Jan 27 13:01:14 crc kubenswrapper[4745]: I0127 13:01:14.085157 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3167571-2811-45ab-9830-58f9f5ba890f" path="/var/lib/kubelet/pods/d3167571-2811-45ab-9830-58f9f5ba890f/volumes" Jan 27 13:02:07 crc kubenswrapper[4745]: I0127 13:02:07.063892 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b8m7r"] Jan 27 13:02:07 crc kubenswrapper[4745]: E0127 13:02:07.066600 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db65d64c-c028-4048-9d34-623647d32012" containerName="extract-utilities" Jan 27 13:02:07 crc kubenswrapper[4745]: I0127 13:02:07.066726 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="db65d64c-c028-4048-9d34-623647d32012" containerName="extract-utilities" Jan 27 13:02:07 crc kubenswrapper[4745]: E0127 13:02:07.066838 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db65d64c-c028-4048-9d34-623647d32012" containerName="registry-server" Jan 27 13:02:07 crc kubenswrapper[4745]: I0127 13:02:07.066929 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="db65d64c-c028-4048-9d34-623647d32012" containerName="registry-server" Jan 27 13:02:07 crc kubenswrapper[4745]: E0127 13:02:07.067023 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3167571-2811-45ab-9830-58f9f5ba890f" containerName="extract-utilities" Jan 27 13:02:07 crc kubenswrapper[4745]: I0127 13:02:07.067134 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3167571-2811-45ab-9830-58f9f5ba890f" containerName="extract-utilities" Jan 27 13:02:07 crc kubenswrapper[4745]: E0127 13:02:07.067287 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3167571-2811-45ab-9830-58f9f5ba890f" containerName="registry-server" Jan 27 13:02:07 crc kubenswrapper[4745]: I0127 13:02:07.067410 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3167571-2811-45ab-9830-58f9f5ba890f" containerName="registry-server" Jan 27 13:02:07 crc kubenswrapper[4745]: E0127 13:02:07.067541 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3167571-2811-45ab-9830-58f9f5ba890f" containerName="extract-content" Jan 27 13:02:07 crc kubenswrapper[4745]: I0127 13:02:07.067639 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3167571-2811-45ab-9830-58f9f5ba890f" containerName="extract-content" Jan 27 13:02:07 crc kubenswrapper[4745]: E0127 13:02:07.067736 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db65d64c-c028-4048-9d34-623647d32012" 
containerName="extract-content" Jan 27 13:02:07 crc kubenswrapper[4745]: I0127 13:02:07.067867 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="db65d64c-c028-4048-9d34-623647d32012" containerName="extract-content" Jan 27 13:02:07 crc kubenswrapper[4745]: I0127 13:02:07.068315 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3167571-2811-45ab-9830-58f9f5ba890f" containerName="registry-server" Jan 27 13:02:07 crc kubenswrapper[4745]: I0127 13:02:07.068358 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="db65d64c-c028-4048-9d34-623647d32012" containerName="registry-server" Jan 27 13:02:07 crc kubenswrapper[4745]: I0127 13:02:07.069515 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b8m7r" Jan 27 13:02:07 crc kubenswrapper[4745]: I0127 13:02:07.082467 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b8m7r"] Jan 27 13:02:07 crc kubenswrapper[4745]: I0127 13:02:07.086883 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/530f98f5-8215-494d-ab6b-5b1807d779a5-utilities\") pod \"redhat-operators-b8m7r\" (UID: \"530f98f5-8215-494d-ab6b-5b1807d779a5\") " pod="openshift-marketplace/redhat-operators-b8m7r" Jan 27 13:02:07 crc kubenswrapper[4745]: I0127 13:02:07.087160 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5z82\" (UniqueName: \"kubernetes.io/projected/530f98f5-8215-494d-ab6b-5b1807d779a5-kube-api-access-m5z82\") pod \"redhat-operators-b8m7r\" (UID: \"530f98f5-8215-494d-ab6b-5b1807d779a5\") " pod="openshift-marketplace/redhat-operators-b8m7r" Jan 27 13:02:07 crc kubenswrapper[4745]: I0127 13:02:07.087261 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/530f98f5-8215-494d-ab6b-5b1807d779a5-catalog-content\") pod \"redhat-operators-b8m7r\" (UID: \"530f98f5-8215-494d-ab6b-5b1807d779a5\") " pod="openshift-marketplace/redhat-operators-b8m7r" Jan 27 13:02:07 crc kubenswrapper[4745]: I0127 13:02:07.188244 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/530f98f5-8215-494d-ab6b-5b1807d779a5-catalog-content\") pod \"redhat-operators-b8m7r\" (UID: \"530f98f5-8215-494d-ab6b-5b1807d779a5\") " pod="openshift-marketplace/redhat-operators-b8m7r" Jan 27 13:02:07 crc kubenswrapper[4745]: I0127 13:02:07.188756 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/530f98f5-8215-494d-ab6b-5b1807d779a5-catalog-content\") pod \"redhat-operators-b8m7r\" (UID: \"530f98f5-8215-494d-ab6b-5b1807d779a5\") " pod="openshift-marketplace/redhat-operators-b8m7r" Jan 27 13:02:07 crc kubenswrapper[4745]: I0127 13:02:07.188802 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/530f98f5-8215-494d-ab6b-5b1807d779a5-utilities\") pod \"redhat-operators-b8m7r\" (UID: \"530f98f5-8215-494d-ab6b-5b1807d779a5\") " pod="openshift-marketplace/redhat-operators-b8m7r" Jan 27 13:02:07 crc kubenswrapper[4745]: I0127 13:02:07.188887 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5z82\" 
(UniqueName: \"kubernetes.io/projected/530f98f5-8215-494d-ab6b-5b1807d779a5-kube-api-access-m5z82\") pod \"redhat-operators-b8m7r\" (UID: \"530f98f5-8215-494d-ab6b-5b1807d779a5\") " pod="openshift-marketplace/redhat-operators-b8m7r" Jan 27 13:02:07 crc kubenswrapper[4745]: I0127 13:02:07.189269 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/530f98f5-8215-494d-ab6b-5b1807d779a5-utilities\") pod \"redhat-operators-b8m7r\" (UID: \"530f98f5-8215-494d-ab6b-5b1807d779a5\") " pod="openshift-marketplace/redhat-operators-b8m7r" Jan 27 13:02:07 crc kubenswrapper[4745]: I0127 13:02:07.208364 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5z82\" (UniqueName: \"kubernetes.io/projected/530f98f5-8215-494d-ab6b-5b1807d779a5-kube-api-access-m5z82\") pod \"redhat-operators-b8m7r\" (UID: \"530f98f5-8215-494d-ab6b-5b1807d779a5\") " pod="openshift-marketplace/redhat-operators-b8m7r" Jan 27 13:02:07 crc kubenswrapper[4745]: I0127 13:02:07.410512 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b8m7r" Jan 27 13:02:07 crc kubenswrapper[4745]: I0127 13:02:07.853623 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b8m7r"] Jan 27 13:02:08 crc kubenswrapper[4745]: I0127 13:02:08.398098 4745 generic.go:334] "Generic (PLEG): container finished" podID="530f98f5-8215-494d-ab6b-5b1807d779a5" containerID="11a29caad332b029f4438298f877d766c17b7868b2df794de628fd69f998c76a" exitCode=0 Jan 27 13:02:08 crc kubenswrapper[4745]: I0127 13:02:08.398156 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8m7r" event={"ID":"530f98f5-8215-494d-ab6b-5b1807d779a5","Type":"ContainerDied","Data":"11a29caad332b029f4438298f877d766c17b7868b2df794de628fd69f998c76a"} Jan 27 13:02:08 crc kubenswrapper[4745]: I0127 13:02:08.398189 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8m7r" event={"ID":"530f98f5-8215-494d-ab6b-5b1807d779a5","Type":"ContainerStarted","Data":"506075dc546f74b6c56ca7e89ee02f2e0a62bc6a6a8ecd408ed3bb9a62e44511"} Jan 27 13:02:09 crc kubenswrapper[4745]: I0127 13:02:09.406699 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8m7r" event={"ID":"530f98f5-8215-494d-ab6b-5b1807d779a5","Type":"ContainerStarted","Data":"cc8bc637b00a0de705bc10e817b1a04d9118fe4d23245fd285f919c6c0014ed4"} Jan 27 13:02:09 crc kubenswrapper[4745]: I0127 13:02:09.469950 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w7cqd"] Jan 27 13:02:09 crc kubenswrapper[4745]: I0127 13:02:09.471825 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w7cqd" Jan 27 13:02:09 crc kubenswrapper[4745]: I0127 13:02:09.489285 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w7cqd"] Jan 27 13:02:09 crc kubenswrapper[4745]: I0127 13:02:09.554165 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76z9k\" (UniqueName: \"kubernetes.io/projected/c64c60a4-d5a0-4a29-8866-2e98d7b223e1-kube-api-access-76z9k\") pod \"certified-operators-w7cqd\" (UID: \"c64c60a4-d5a0-4a29-8866-2e98d7b223e1\") " pod="openshift-marketplace/certified-operators-w7cqd" Jan 27 13:02:09 crc kubenswrapper[4745]: I0127 13:02:09.554219 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c64c60a4-d5a0-4a29-8866-2e98d7b223e1-utilities\") pod \"certified-operators-w7cqd\" (UID: \"c64c60a4-d5a0-4a29-8866-2e98d7b223e1\") " pod="openshift-marketplace/certified-operators-w7cqd" Jan 27 13:02:09 crc kubenswrapper[4745]: I0127 13:02:09.554350 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c64c60a4-d5a0-4a29-8866-2e98d7b223e1-catalog-content\") pod \"certified-operators-w7cqd\" (UID: \"c64c60a4-d5a0-4a29-8866-2e98d7b223e1\") " pod="openshift-marketplace/certified-operators-w7cqd" Jan 27 13:02:09 crc kubenswrapper[4745]: I0127 13:02:09.655182 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c64c60a4-d5a0-4a29-8866-2e98d7b223e1-catalog-content\") pod \"certified-operators-w7cqd\" (UID: \"c64c60a4-d5a0-4a29-8866-2e98d7b223e1\") " pod="openshift-marketplace/certified-operators-w7cqd" Jan 27 13:02:09 crc kubenswrapper[4745]: I0127 13:02:09.655283 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76z9k\" (UniqueName: \"kubernetes.io/projected/c64c60a4-d5a0-4a29-8866-2e98d7b223e1-kube-api-access-76z9k\") pod \"certified-operators-w7cqd\" (UID: \"c64c60a4-d5a0-4a29-8866-2e98d7b223e1\") " pod="openshift-marketplace/certified-operators-w7cqd" Jan 27 13:02:09 crc kubenswrapper[4745]: I0127 13:02:09.655310 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c64c60a4-d5a0-4a29-8866-2e98d7b223e1-utilities\") pod \"certified-operators-w7cqd\" (UID: \"c64c60a4-d5a0-4a29-8866-2e98d7b223e1\") " pod="openshift-marketplace/certified-operators-w7cqd" Jan 27 13:02:09 crc kubenswrapper[4745]: I0127 13:02:09.655831 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c64c60a4-d5a0-4a29-8866-2e98d7b223e1-utilities\") pod \"certified-operators-w7cqd\" (UID: \"c64c60a4-d5a0-4a29-8866-2e98d7b223e1\") " pod="openshift-marketplace/certified-operators-w7cqd" Jan 27 13:02:09 crc kubenswrapper[4745]: I0127 13:02:09.656049 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c64c60a4-d5a0-4a29-8866-2e98d7b223e1-catalog-content\") pod \"certified-operators-w7cqd\" (UID: \"c64c60a4-d5a0-4a29-8866-2e98d7b223e1\") " pod="openshift-marketplace/certified-operators-w7cqd" Jan 27 13:02:09 crc kubenswrapper[4745]: I0127 13:02:09.675163 4745 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-76z9k\" (UniqueName: \"kubernetes.io/projected/c64c60a4-d5a0-4a29-8866-2e98d7b223e1-kube-api-access-76z9k\") pod \"certified-operators-w7cqd\" (UID: \"c64c60a4-d5a0-4a29-8866-2e98d7b223e1\") " pod="openshift-marketplace/certified-operators-w7cqd" Jan 27 13:02:09 crc kubenswrapper[4745]: I0127 13:02:09.787769 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w7cqd" Jan 27 13:02:10 crc kubenswrapper[4745]: I0127 13:02:10.163750 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w7cqd"] Jan 27 13:02:10 crc kubenswrapper[4745]: I0127 13:02:10.415872 4745 generic.go:334] "Generic (PLEG): container finished" podID="530f98f5-8215-494d-ab6b-5b1807d779a5" containerID="cc8bc637b00a0de705bc10e817b1a04d9118fe4d23245fd285f919c6c0014ed4" exitCode=0 Jan 27 13:02:10 crc kubenswrapper[4745]: I0127 13:02:10.416255 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8m7r" event={"ID":"530f98f5-8215-494d-ab6b-5b1807d779a5","Type":"ContainerDied","Data":"cc8bc637b00a0de705bc10e817b1a04d9118fe4d23245fd285f919c6c0014ed4"} Jan 27 13:02:10 crc kubenswrapper[4745]: I0127 13:02:10.419464 4745 generic.go:334] "Generic (PLEG): container finished" podID="c64c60a4-d5a0-4a29-8866-2e98d7b223e1" containerID="b9858a9e7db84bfbc338455daac42a405cd7083526aa18ea75cb51f8401b2ecb" exitCode=0 Jan 27 13:02:10 crc kubenswrapper[4745]: I0127 13:02:10.419494 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w7cqd" event={"ID":"c64c60a4-d5a0-4a29-8866-2e98d7b223e1","Type":"ContainerDied","Data":"b9858a9e7db84bfbc338455daac42a405cd7083526aa18ea75cb51f8401b2ecb"} Jan 27 13:02:10 crc kubenswrapper[4745]: I0127 13:02:10.419517 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w7cqd" event={"ID":"c64c60a4-d5a0-4a29-8866-2e98d7b223e1","Type":"ContainerStarted","Data":"7d6019ca9e2a8e1d79cce09329479adaa9ac368064e5e91327edfdde928b5b02"} Jan 27 13:02:11 crc kubenswrapper[4745]: I0127 13:02:11.428520 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8m7r" event={"ID":"530f98f5-8215-494d-ab6b-5b1807d779a5","Type":"ContainerStarted","Data":"de2ecc12b9072ef30319ee02d1492ff9dbc88a1e5addf2d4984d7576eb88998f"} Jan 27 13:02:11 crc kubenswrapper[4745]: I0127 13:02:11.450937 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b8m7r" podStartSLOduration=1.8139126559999998 podStartE2EDuration="4.450918269s" podCreationTimestamp="2026-01-27 13:02:07 +0000 UTC" firstStartedPulling="2026-01-27 13:02:08.400035778 +0000 UTC m=+3021.204946476" lastFinishedPulling="2026-01-27 13:02:11.037041401 +0000 UTC m=+3023.841952089" observedRunningTime="2026-01-27 13:02:11.450314982 +0000 UTC m=+3024.255225700" watchObservedRunningTime="2026-01-27 13:02:11.450918269 +0000 UTC m=+3024.255828957" Jan 27 13:02:12 crc kubenswrapper[4745]: I0127 13:02:12.440273 4745 generic.go:334] "Generic (PLEG): container finished" podID="c64c60a4-d5a0-4a29-8866-2e98d7b223e1" containerID="2e0db8de98ef13c8e142e1f2fec3c19c9f1921c7cd7d8b58148c044c98577258" exitCode=0 Jan 27 13:02:12 crc kubenswrapper[4745]: I0127 13:02:12.440369 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w7cqd" 
event={"ID":"c64c60a4-d5a0-4a29-8866-2e98d7b223e1","Type":"ContainerDied","Data":"2e0db8de98ef13c8e142e1f2fec3c19c9f1921c7cd7d8b58148c044c98577258"} Jan 27 13:02:13 crc kubenswrapper[4745]: I0127 13:02:13.448622 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w7cqd" event={"ID":"c64c60a4-d5a0-4a29-8866-2e98d7b223e1","Type":"ContainerStarted","Data":"44a1dfdd571b3365f728057d60d5ef2a9b533c4b9168ba95397aef035a44a257"} Jan 27 13:02:13 crc kubenswrapper[4745]: I0127 13:02:13.469437 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w7cqd" podStartSLOduration=1.753972665 podStartE2EDuration="4.46942059s" podCreationTimestamp="2026-01-27 13:02:09 +0000 UTC" firstStartedPulling="2026-01-27 13:02:10.420768602 +0000 UTC m=+3023.225679290" lastFinishedPulling="2026-01-27 13:02:13.136216527 +0000 UTC m=+3025.941127215" observedRunningTime="2026-01-27 13:02:13.466746005 +0000 UTC m=+3026.271656693" watchObservedRunningTime="2026-01-27 13:02:13.46942059 +0000 UTC m=+3026.274331278" Jan 27 13:02:17 crc kubenswrapper[4745]: I0127 13:02:17.410734 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b8m7r" Jan 27 13:02:17 crc kubenswrapper[4745]: I0127 13:02:17.411078 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b8m7r" Jan 27 13:02:17 crc kubenswrapper[4745]: I0127 13:02:17.474678 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b8m7r" Jan 27 13:02:17 crc kubenswrapper[4745]: I0127 13:02:17.538372 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b8m7r" Jan 27 13:02:17 crc kubenswrapper[4745]: I0127 13:02:17.851144 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b8m7r"] Jan 27 13:02:19 crc kubenswrapper[4745]: I0127 13:02:19.512712 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b8m7r" podUID="530f98f5-8215-494d-ab6b-5b1807d779a5" containerName="registry-server" containerID="cri-o://de2ecc12b9072ef30319ee02d1492ff9dbc88a1e5addf2d4984d7576eb88998f" gracePeriod=2 Jan 27 13:02:19 crc kubenswrapper[4745]: I0127 13:02:19.789181 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w7cqd" Jan 27 13:02:19 crc kubenswrapper[4745]: I0127 13:02:19.789249 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w7cqd" Jan 27 13:02:19 crc kubenswrapper[4745]: I0127 13:02:19.838179 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w7cqd" Jan 27 13:02:20 crc kubenswrapper[4745]: I0127 13:02:20.522662 4745 generic.go:334] "Generic (PLEG): container finished" podID="530f98f5-8215-494d-ab6b-5b1807d779a5" containerID="de2ecc12b9072ef30319ee02d1492ff9dbc88a1e5addf2d4984d7576eb88998f" exitCode=0 Jan 27 13:02:20 crc kubenswrapper[4745]: I0127 13:02:20.522712 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8m7r" event={"ID":"530f98f5-8215-494d-ab6b-5b1807d779a5","Type":"ContainerDied","Data":"de2ecc12b9072ef30319ee02d1492ff9dbc88a1e5addf2d4984d7576eb88998f"} Jan 27 13:02:20 
Jan 27 13:02:17 crc kubenswrapper[4745]: I0127 13:02:17.410734 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b8m7r" Jan 27 13:02:17 crc kubenswrapper[4745]: I0127 13:02:17.411078 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b8m7r" Jan 27 13:02:17 crc kubenswrapper[4745]: I0127 13:02:17.474678 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b8m7r" Jan 27 13:02:17 crc kubenswrapper[4745]: I0127 13:02:17.538372 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b8m7r" Jan 27 13:02:17 crc kubenswrapper[4745]: I0127 13:02:17.851144 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b8m7r"] Jan 27 13:02:19 crc kubenswrapper[4745]: I0127 13:02:19.512712 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b8m7r" podUID="530f98f5-8215-494d-ab6b-5b1807d779a5" containerName="registry-server" containerID="cri-o://de2ecc12b9072ef30319ee02d1492ff9dbc88a1e5addf2d4984d7576eb88998f" gracePeriod=2 Jan 27 13:02:19 crc kubenswrapper[4745]: I0127 13:02:19.789181 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w7cqd" Jan 27 13:02:19 crc kubenswrapper[4745]: I0127 13:02:19.789249 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w7cqd" Jan 27 13:02:19 crc kubenswrapper[4745]: I0127 13:02:19.838179 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w7cqd" Jan 27 13:02:20 crc kubenswrapper[4745]: I0127 13:02:20.522662 4745 generic.go:334] "Generic (PLEG): container finished" podID="530f98f5-8215-494d-ab6b-5b1807d779a5" containerID="de2ecc12b9072ef30319ee02d1492ff9dbc88a1e5addf2d4984d7576eb88998f" exitCode=0 Jan 27 13:02:20 crc kubenswrapper[4745]: I0127 13:02:20.522712 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8m7r" event={"ID":"530f98f5-8215-494d-ab6b-5b1807d779a5","Type":"ContainerDied","Data":"de2ecc12b9072ef30319ee02d1492ff9dbc88a1e5addf2d4984d7576eb88998f"} Jan 27 13:02:20 crc kubenswrapper[4745]: I0127 13:02:20.570605 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w7cqd" Jan 27 13:02:20 crc kubenswrapper[4745]: I0127 13:02:20.604051 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b8m7r" Jan 27 13:02:20 crc kubenswrapper[4745]: I0127 13:02:20.632181 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5z82\" (UniqueName: \"kubernetes.io/projected/530f98f5-8215-494d-ab6b-5b1807d779a5-kube-api-access-m5z82\") pod \"530f98f5-8215-494d-ab6b-5b1807d779a5\" (UID: \"530f98f5-8215-494d-ab6b-5b1807d779a5\") " Jan 27 13:02:20 crc kubenswrapper[4745]: I0127 13:02:20.632309 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/530f98f5-8215-494d-ab6b-5b1807d779a5-utilities\") pod \"530f98f5-8215-494d-ab6b-5b1807d779a5\" (UID: \"530f98f5-8215-494d-ab6b-5b1807d779a5\") " Jan 27 13:02:20 crc kubenswrapper[4745]: I0127 13:02:20.632450 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/530f98f5-8215-494d-ab6b-5b1807d779a5-catalog-content\") pod \"530f98f5-8215-494d-ab6b-5b1807d779a5\" (UID: \"530f98f5-8215-494d-ab6b-5b1807d779a5\") " Jan 27 13:02:20 crc kubenswrapper[4745]: I0127 13:02:20.634610 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/530f98f5-8215-494d-ab6b-5b1807d779a5-utilities" (OuterVolumeSpecName: "utilities") pod "530f98f5-8215-494d-ab6b-5b1807d779a5" (UID: "530f98f5-8215-494d-ab6b-5b1807d779a5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:02:20 crc kubenswrapper[4745]: I0127 13:02:20.643307 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/530f98f5-8215-494d-ab6b-5b1807d779a5-kube-api-access-m5z82" (OuterVolumeSpecName: "kube-api-access-m5z82") pod "530f98f5-8215-494d-ab6b-5b1807d779a5" (UID: "530f98f5-8215-494d-ab6b-5b1807d779a5"). InnerVolumeSpecName "kube-api-access-m5z82". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:02:20 crc kubenswrapper[4745]: I0127 13:02:20.734856 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5z82\" (UniqueName: \"kubernetes.io/projected/530f98f5-8215-494d-ab6b-5b1807d779a5-kube-api-access-m5z82\") on node \"crc\" DevicePath \"\"" Jan 27 13:02:20 crc kubenswrapper[4745]: I0127 13:02:20.734899 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/530f98f5-8215-494d-ab6b-5b1807d779a5-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:02:21 crc kubenswrapper[4745]: I0127 13:02:21.541055 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8m7r" event={"ID":"530f98f5-8215-494d-ab6b-5b1807d779a5","Type":"ContainerDied","Data":"506075dc546f74b6c56ca7e89ee02f2e0a62bc6a6a8ecd408ed3bb9a62e44511"} Jan 27 13:02:21 crc kubenswrapper[4745]: I0127 13:02:21.541121 4745 scope.go:117] "RemoveContainer" containerID="de2ecc12b9072ef30319ee02d1492ff9dbc88a1e5addf2d4984d7576eb88998f" Jan 27 13:02:21 crc kubenswrapper[4745]: I0127 13:02:21.541142 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b8m7r" Jan 27 13:02:21 crc kubenswrapper[4745]: I0127 13:02:21.569191 4745 scope.go:117] "RemoveContainer" containerID="cc8bc637b00a0de705bc10e817b1a04d9118fe4d23245fd285f919c6c0014ed4" Jan 27 13:02:21 crc kubenswrapper[4745]: I0127 13:02:21.598525 4745 scope.go:117] "RemoveContainer" containerID="11a29caad332b029f4438298f877d766c17b7868b2df794de628fd69f998c76a" Jan 27 13:02:21 crc kubenswrapper[4745]: I0127 13:02:21.851194 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w7cqd"] Jan 27 13:02:22 crc kubenswrapper[4745]: I0127 13:02:22.221493 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/530f98f5-8215-494d-ab6b-5b1807d779a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "530f98f5-8215-494d-ab6b-5b1807d779a5" (UID: "530f98f5-8215-494d-ab6b-5b1807d779a5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:02:22 crc kubenswrapper[4745]: I0127 13:02:22.262287 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/530f98f5-8215-494d-ab6b-5b1807d779a5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:02:22 crc kubenswrapper[4745]: I0127 13:02:22.479546 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b8m7r"] Jan 27 13:02:22 crc kubenswrapper[4745]: I0127 13:02:22.485006 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b8m7r"] Jan 27 13:02:22 crc kubenswrapper[4745]: I0127 13:02:22.550071 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w7cqd" podUID="c64c60a4-d5a0-4a29-8866-2e98d7b223e1" containerName="registry-server" containerID="cri-o://44a1dfdd571b3365f728057d60d5ef2a9b533c4b9168ba95397aef035a44a257" gracePeriod=2 Jan 27 13:02:23 crc kubenswrapper[4745]: I0127 13:02:23.561470 4745 generic.go:334] "Generic (PLEG): container finished" podID="c64c60a4-d5a0-4a29-8866-2e98d7b223e1" containerID="44a1dfdd571b3365f728057d60d5ef2a9b533c4b9168ba95397aef035a44a257" exitCode=0 Jan 27 13:02:23 crc kubenswrapper[4745]: I0127 13:02:23.561582 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w7cqd" event={"ID":"c64c60a4-d5a0-4a29-8866-2e98d7b223e1","Type":"ContainerDied","Data":"44a1dfdd571b3365f728057d60d5ef2a9b533c4b9168ba95397aef035a44a257"} Jan 27 13:02:24 crc kubenswrapper[4745]: I0127 13:02:24.086094 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="530f98f5-8215-494d-ab6b-5b1807d779a5" path="/var/lib/kubelet/pods/530f98f5-8215-494d-ab6b-5b1807d779a5/volumes" Jan 27 13:02:24 crc kubenswrapper[4745]: I0127 13:02:24.158529 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w7cqd" Jan 27 13:02:24 crc kubenswrapper[4745]: I0127 13:02:24.305635 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c64c60a4-d5a0-4a29-8866-2e98d7b223e1-catalog-content\") pod \"c64c60a4-d5a0-4a29-8866-2e98d7b223e1\" (UID: \"c64c60a4-d5a0-4a29-8866-2e98d7b223e1\") " Jan 27 13:02:24 crc kubenswrapper[4745]: I0127 13:02:24.305854 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76z9k\" (UniqueName: \"kubernetes.io/projected/c64c60a4-d5a0-4a29-8866-2e98d7b223e1-kube-api-access-76z9k\") pod \"c64c60a4-d5a0-4a29-8866-2e98d7b223e1\" (UID: \"c64c60a4-d5a0-4a29-8866-2e98d7b223e1\") " Jan 27 13:02:24 crc kubenswrapper[4745]: I0127 13:02:24.305916 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c64c60a4-d5a0-4a29-8866-2e98d7b223e1-utilities\") pod \"c64c60a4-d5a0-4a29-8866-2e98d7b223e1\" (UID: \"c64c60a4-d5a0-4a29-8866-2e98d7b223e1\") " Jan 27 13:02:24 crc kubenswrapper[4745]: I0127 13:02:24.306797 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c64c60a4-d5a0-4a29-8866-2e98d7b223e1-utilities" (OuterVolumeSpecName: "utilities") pod "c64c60a4-d5a0-4a29-8866-2e98d7b223e1" (UID: "c64c60a4-d5a0-4a29-8866-2e98d7b223e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:02:24 crc kubenswrapper[4745]: I0127 13:02:24.310656 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c64c60a4-d5a0-4a29-8866-2e98d7b223e1-kube-api-access-76z9k" (OuterVolumeSpecName: "kube-api-access-76z9k") pod "c64c60a4-d5a0-4a29-8866-2e98d7b223e1" (UID: "c64c60a4-d5a0-4a29-8866-2e98d7b223e1"). InnerVolumeSpecName "kube-api-access-76z9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:02:24 crc kubenswrapper[4745]: I0127 13:02:24.349047 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c64c60a4-d5a0-4a29-8866-2e98d7b223e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c64c60a4-d5a0-4a29-8866-2e98d7b223e1" (UID: "c64c60a4-d5a0-4a29-8866-2e98d7b223e1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:02:24 crc kubenswrapper[4745]: I0127 13:02:24.407293 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76z9k\" (UniqueName: \"kubernetes.io/projected/c64c60a4-d5a0-4a29-8866-2e98d7b223e1-kube-api-access-76z9k\") on node \"crc\" DevicePath \"\"" Jan 27 13:02:24 crc kubenswrapper[4745]: I0127 13:02:24.407345 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c64c60a4-d5a0-4a29-8866-2e98d7b223e1-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:02:24 crc kubenswrapper[4745]: I0127 13:02:24.407360 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c64c60a4-d5a0-4a29-8866-2e98d7b223e1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:02:24 crc kubenswrapper[4745]: I0127 13:02:24.572948 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w7cqd" event={"ID":"c64c60a4-d5a0-4a29-8866-2e98d7b223e1","Type":"ContainerDied","Data":"7d6019ca9e2a8e1d79cce09329479adaa9ac368064e5e91327edfdde928b5b02"} Jan 27 13:02:24 crc kubenswrapper[4745]: I0127 13:02:24.573042 4745 scope.go:117] "RemoveContainer" containerID="44a1dfdd571b3365f728057d60d5ef2a9b533c4b9168ba95397aef035a44a257" Jan 27 13:02:24 crc kubenswrapper[4745]: I0127 13:02:24.573267 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w7cqd" Jan 27 13:02:24 crc kubenswrapper[4745]: I0127 13:02:24.601284 4745 scope.go:117] "RemoveContainer" containerID="2e0db8de98ef13c8e142e1f2fec3c19c9f1921c7cd7d8b58148c044c98577258" Jan 27 13:02:24 crc kubenswrapper[4745]: I0127 13:02:24.621304 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w7cqd"] Jan 27 13:02:24 crc kubenswrapper[4745]: I0127 13:02:24.626297 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w7cqd"] Jan 27 13:02:24 crc kubenswrapper[4745]: I0127 13:02:24.643302 4745 scope.go:117] "RemoveContainer" containerID="b9858a9e7db84bfbc338455daac42a405cd7083526aa18ea75cb51f8401b2ecb" Jan 27 13:02:26 crc kubenswrapper[4745]: I0127 13:02:26.085790 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c64c60a4-d5a0-4a29-8866-2e98d7b223e1" path="/var/lib/kubelet/pods/c64c60a4-d5a0-4a29-8866-2e98d7b223e1/volumes" Jan 27 13:03:35 crc kubenswrapper[4745]: I0127 13:03:35.967450 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:03:35 crc kubenswrapper[4745]: I0127 13:03:35.968032 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:04:05 crc kubenswrapper[4745]: I0127 13:04:05.967329 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
Jan 27 13:03:35 crc kubenswrapper[4745]: I0127 13:03:35.967450 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:03:35 crc kubenswrapper[4745]: I0127 13:03:35.968032 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:04:05 crc kubenswrapper[4745]: I0127 13:04:05.967329 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:04:05 crc kubenswrapper[4745]: I0127 13:04:05.968034 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:04:35 crc kubenswrapper[4745]: I0127 13:04:35.967530 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:04:35 crc kubenswrapper[4745]: I0127 13:04:35.968182 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:04:35 crc kubenswrapper[4745]: I0127 13:04:35.968232 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" Jan 27 13:04:35 crc kubenswrapper[4745]: I0127 13:04:35.968789 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048"} pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 13:04:35 crc kubenswrapper[4745]: I0127 13:04:35.968870 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" containerID="cri-o://45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048" gracePeriod=600 Jan 27 13:04:36 crc kubenswrapper[4745]: E0127 13:04:36.101411 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:04:36 crc kubenswrapper[4745]: I0127 13:04:36.620355 4745 generic.go:334] "Generic (PLEG): container finished" podID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerID="45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048" exitCode=0 Jan 27 13:04:36 crc kubenswrapper[4745]: I0127 13:04:36.620634 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerDied","Data":"45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048"} Jan 27 13:04:36 crc kubenswrapper[4745]: I0127 13:04:36.620676 4745 scope.go:117] "RemoveContainer" containerID="13887eb0088f3e5d43d51e708ce4f207dcbfd90318c899c53ec115321a09107c" Jan 27 13:04:36 crc kubenswrapper[4745]: I0127 
13:04:36.621200 4745 scope.go:117] "RemoveContainer" containerID="45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048" Jan 27 13:04:36 crc kubenswrapper[4745]: E0127 13:04:36.621396 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:04:52 crc kubenswrapper[4745]: I0127 13:04:52.073838 4745 scope.go:117] "RemoveContainer" containerID="45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048" Jan 27 13:04:52 crc kubenswrapper[4745]: E0127 13:04:52.074590 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:05:05 crc kubenswrapper[4745]: I0127 13:05:05.073960 4745 scope.go:117] "RemoveContainer" containerID="45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048" Jan 27 13:05:05 crc kubenswrapper[4745]: E0127 13:05:05.074683 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:05:16 crc kubenswrapper[4745]: I0127 13:05:16.074119 4745 scope.go:117] "RemoveContainer" containerID="45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048" Jan 27 13:05:16 crc kubenswrapper[4745]: E0127 13:05:16.075057 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:05:29 crc kubenswrapper[4745]: I0127 13:05:29.073304 4745 scope.go:117] "RemoveContainer" containerID="45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048" Jan 27 13:05:29 crc kubenswrapper[4745]: E0127 13:05:29.074050 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:05:44 crc kubenswrapper[4745]: I0127 13:05:44.073660 4745 scope.go:117] "RemoveContainer" containerID="45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048" Jan 27 13:05:44 crc kubenswrapper[4745]: E0127 13:05:44.074391 
4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:05:59 crc kubenswrapper[4745]: I0127 13:05:59.074410 4745 scope.go:117] "RemoveContainer" containerID="45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048" Jan 27 13:05:59 crc kubenswrapper[4745]: E0127 13:05:59.075286 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:06:11 crc kubenswrapper[4745]: I0127 13:06:11.074278 4745 scope.go:117] "RemoveContainer" containerID="45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048" Jan 27 13:06:11 crc kubenswrapper[4745]: E0127 13:06:11.075402 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:06:23 crc kubenswrapper[4745]: I0127 13:06:23.074367 4745 scope.go:117] "RemoveContainer" containerID="45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048" Jan 27 13:06:23 crc kubenswrapper[4745]: E0127 13:06:23.075548 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:06:37 crc kubenswrapper[4745]: I0127 13:06:37.074007 4745 scope.go:117] "RemoveContainer" containerID="45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048" Jan 27 13:06:37 crc kubenswrapper[4745]: E0127 13:06:37.074704 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:06:49 crc kubenswrapper[4745]: I0127 13:06:49.073900 4745 scope.go:117] "RemoveContainer" containerID="45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048" Jan 27 13:06:49 crc kubenswrapper[4745]: E0127 13:06:49.074571 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:07:02 crc kubenswrapper[4745]: I0127 13:07:02.074340 4745 scope.go:117] "RemoveContainer" containerID="45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048" Jan 27 13:07:02 crc kubenswrapper[4745]: E0127 13:07:02.075090 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:07:13 crc kubenswrapper[4745]: I0127 13:07:13.073882 4745 scope.go:117] "RemoveContainer" containerID="45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048" Jan 27 13:07:13 crc kubenswrapper[4745]: E0127 13:07:13.074420 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:07:28 crc kubenswrapper[4745]: I0127 13:07:28.079707 4745 scope.go:117] "RemoveContainer" containerID="45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048" Jan 27 13:07:28 crc kubenswrapper[4745]: E0127 13:07:28.080582 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:07:41 crc kubenswrapper[4745]: I0127 13:07:41.073422 4745 scope.go:117] "RemoveContainer" containerID="45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048" Jan 27 13:07:41 crc kubenswrapper[4745]: E0127 13:07:41.074160 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:07:56 crc kubenswrapper[4745]: I0127 13:07:56.073633 4745 scope.go:117] "RemoveContainer" containerID="45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048" Jan 27 13:07:56 crc kubenswrapper[4745]: E0127 13:07:56.074449 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:08:08 crc kubenswrapper[4745]: I0127 13:08:08.079153 4745 scope.go:117] "RemoveContainer" containerID="45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048" Jan 27 13:08:08 crc kubenswrapper[4745]: E0127 13:08:08.079978 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:08:22 crc kubenswrapper[4745]: I0127 13:08:22.074006 4745 scope.go:117] "RemoveContainer" containerID="45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048" Jan 27 13:08:22 crc kubenswrapper[4745]: E0127 13:08:22.074912 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:08:35 crc kubenswrapper[4745]: I0127 13:08:35.073724 4745 scope.go:117] "RemoveContainer" containerID="45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048" Jan 27 13:08:35 crc kubenswrapper[4745]: E0127 13:08:35.074620 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:08:47 crc kubenswrapper[4745]: I0127 13:08:47.073720 4745 scope.go:117] "RemoveContainer" containerID="45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048" Jan 27 13:08:47 crc kubenswrapper[4745]: E0127 13:08:47.074386 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:09:01 crc kubenswrapper[4745]: I0127 13:09:01.074453 4745 scope.go:117] "RemoveContainer" containerID="45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048" Jan 27 13:09:01 crc kubenswrapper[4745]: E0127 13:09:01.075013 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:09:13 crc kubenswrapper[4745]: I0127 13:09:13.073682 4745 
scope.go:117] "RemoveContainer" containerID="45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048" Jan 27 13:09:13 crc kubenswrapper[4745]: E0127 13:09:13.074509 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:09:25 crc kubenswrapper[4745]: I0127 13:09:25.074596 4745 scope.go:117] "RemoveContainer" containerID="45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048" Jan 27 13:09:25 crc kubenswrapper[4745]: E0127 13:09:25.075402 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:09:37 crc kubenswrapper[4745]: I0127 13:09:37.074215 4745 scope.go:117] "RemoveContainer" containerID="45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048" Jan 27 13:09:37 crc kubenswrapper[4745]: I0127 13:09:37.923765 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerStarted","Data":"d938299b6752fac083ad77a1edce2a879b62760f1a87f7148a931dd6e44908ee"} Jan 27 13:10:52 crc kubenswrapper[4745]: I0127 13:10:52.325005 4745 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-5gvmk container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.48:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 13:10:52 crc kubenswrapper[4745]: I0127 13:10:52.325632 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5gvmk" podUID="6864b9ac-a4d6-46c5-b994-9710da668093" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.48:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 13:11:47 crc kubenswrapper[4745]: I0127 13:11:47.220273 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-54kch"] Jan 27 13:11:47 crc kubenswrapper[4745]: E0127 13:11:47.221237 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c64c60a4-d5a0-4a29-8866-2e98d7b223e1" containerName="extract-content" Jan 27 13:11:47 crc kubenswrapper[4745]: I0127 13:11:47.221252 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="c64c60a4-d5a0-4a29-8866-2e98d7b223e1" containerName="extract-content" Jan 27 13:11:47 crc kubenswrapper[4745]: E0127 13:11:47.221262 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c64c60a4-d5a0-4a29-8866-2e98d7b223e1" containerName="extract-utilities" Jan 27 13:11:47 crc kubenswrapper[4745]: I0127 13:11:47.221269 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="c64c60a4-d5a0-4a29-8866-2e98d7b223e1" 
containerName="extract-utilities" Jan 27 13:11:47 crc kubenswrapper[4745]: E0127 13:11:47.221284 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="530f98f5-8215-494d-ab6b-5b1807d779a5" containerName="extract-utilities" Jan 27 13:11:47 crc kubenswrapper[4745]: I0127 13:11:47.221290 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="530f98f5-8215-494d-ab6b-5b1807d779a5" containerName="extract-utilities" Jan 27 13:11:47 crc kubenswrapper[4745]: E0127 13:11:47.221305 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="530f98f5-8215-494d-ab6b-5b1807d779a5" containerName="extract-content" Jan 27 13:11:47 crc kubenswrapper[4745]: I0127 13:11:47.221311 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="530f98f5-8215-494d-ab6b-5b1807d779a5" containerName="extract-content" Jan 27 13:11:47 crc kubenswrapper[4745]: E0127 13:11:47.221324 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="530f98f5-8215-494d-ab6b-5b1807d779a5" containerName="registry-server" Jan 27 13:11:47 crc kubenswrapper[4745]: I0127 13:11:47.221329 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="530f98f5-8215-494d-ab6b-5b1807d779a5" containerName="registry-server" Jan 27 13:11:47 crc kubenswrapper[4745]: E0127 13:11:47.221341 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c64c60a4-d5a0-4a29-8866-2e98d7b223e1" containerName="registry-server" Jan 27 13:11:47 crc kubenswrapper[4745]: I0127 13:11:47.221347 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="c64c60a4-d5a0-4a29-8866-2e98d7b223e1" containerName="registry-server" Jan 27 13:11:47 crc kubenswrapper[4745]: I0127 13:11:47.221464 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="c64c60a4-d5a0-4a29-8866-2e98d7b223e1" containerName="registry-server" Jan 27 13:11:47 crc kubenswrapper[4745]: I0127 13:11:47.221483 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="530f98f5-8215-494d-ab6b-5b1807d779a5" containerName="registry-server" Jan 27 13:11:47 crc kubenswrapper[4745]: I0127 13:11:47.222640 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-54kch" Jan 27 13:11:47 crc kubenswrapper[4745]: I0127 13:11:47.235102 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5b2813-fb4b-4525-a35e-0c80c92d0542-catalog-content\") pod \"community-operators-54kch\" (UID: \"cc5b2813-fb4b-4525-a35e-0c80c92d0542\") " pod="openshift-marketplace/community-operators-54kch" Jan 27 13:11:47 crc kubenswrapper[4745]: I0127 13:11:47.235183 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5b2813-fb4b-4525-a35e-0c80c92d0542-utilities\") pod \"community-operators-54kch\" (UID: \"cc5b2813-fb4b-4525-a35e-0c80c92d0542\") " pod="openshift-marketplace/community-operators-54kch" Jan 27 13:11:47 crc kubenswrapper[4745]: I0127 13:11:47.235553 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxv7j\" (UniqueName: \"kubernetes.io/projected/cc5b2813-fb4b-4525-a35e-0c80c92d0542-kube-api-access-lxv7j\") pod \"community-operators-54kch\" (UID: \"cc5b2813-fb4b-4525-a35e-0c80c92d0542\") " pod="openshift-marketplace/community-operators-54kch" Jan 27 13:11:47 crc kubenswrapper[4745]: I0127 13:11:47.242909 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-54kch"] Jan 27 13:11:47 crc kubenswrapper[4745]: I0127 13:11:47.336466 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5b2813-fb4b-4525-a35e-0c80c92d0542-utilities\") pod \"community-operators-54kch\" (UID: \"cc5b2813-fb4b-4525-a35e-0c80c92d0542\") " pod="openshift-marketplace/community-operators-54kch" Jan 27 13:11:47 crc kubenswrapper[4745]: I0127 13:11:47.336676 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxv7j\" (UniqueName: \"kubernetes.io/projected/cc5b2813-fb4b-4525-a35e-0c80c92d0542-kube-api-access-lxv7j\") pod \"community-operators-54kch\" (UID: \"cc5b2813-fb4b-4525-a35e-0c80c92d0542\") " pod="openshift-marketplace/community-operators-54kch" Jan 27 13:11:47 crc kubenswrapper[4745]: I0127 13:11:47.336733 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5b2813-fb4b-4525-a35e-0c80c92d0542-catalog-content\") pod \"community-operators-54kch\" (UID: \"cc5b2813-fb4b-4525-a35e-0c80c92d0542\") " pod="openshift-marketplace/community-operators-54kch" Jan 27 13:11:47 crc kubenswrapper[4745]: I0127 13:11:47.337123 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5b2813-fb4b-4525-a35e-0c80c92d0542-utilities\") pod \"community-operators-54kch\" (UID: \"cc5b2813-fb4b-4525-a35e-0c80c92d0542\") " pod="openshift-marketplace/community-operators-54kch" Jan 27 13:11:47 crc kubenswrapper[4745]: I0127 13:11:47.338071 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5b2813-fb4b-4525-a35e-0c80c92d0542-catalog-content\") pod \"community-operators-54kch\" (UID: \"cc5b2813-fb4b-4525-a35e-0c80c92d0542\") " pod="openshift-marketplace/community-operators-54kch" Jan 27 13:11:47 crc kubenswrapper[4745]: I0127 13:11:47.359320 4745 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lxv7j\" (UniqueName: \"kubernetes.io/projected/cc5b2813-fb4b-4525-a35e-0c80c92d0542-kube-api-access-lxv7j\") pod \"community-operators-54kch\" (UID: \"cc5b2813-fb4b-4525-a35e-0c80c92d0542\") " pod="openshift-marketplace/community-operators-54kch" Jan 27 13:11:47 crc kubenswrapper[4745]: I0127 13:11:47.600261 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-54kch" Jan 27 13:11:48 crc kubenswrapper[4745]: I0127 13:11:48.061155 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-54kch"] Jan 27 13:11:48 crc kubenswrapper[4745]: I0127 13:11:48.421556 4745 generic.go:334] "Generic (PLEG): container finished" podID="cc5b2813-fb4b-4525-a35e-0c80c92d0542" containerID="5c36208d1a9ca3bfa176d16e53100dc002ab615de18ea39cebff8292900a5974" exitCode=0 Jan 27 13:11:48 crc kubenswrapper[4745]: I0127 13:11:48.421620 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54kch" event={"ID":"cc5b2813-fb4b-4525-a35e-0c80c92d0542","Type":"ContainerDied","Data":"5c36208d1a9ca3bfa176d16e53100dc002ab615de18ea39cebff8292900a5974"} Jan 27 13:11:48 crc kubenswrapper[4745]: I0127 13:11:48.421666 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54kch" event={"ID":"cc5b2813-fb4b-4525-a35e-0c80c92d0542","Type":"ContainerStarted","Data":"3632b0ed709a8e5bb32c5d4e83c601d9607d9ccbc0ea1bc96d8740cd01530791"} Jan 27 13:11:48 crc kubenswrapper[4745]: I0127 13:11:48.424072 4745 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 13:11:50 crc kubenswrapper[4745]: I0127 13:11:50.439413 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54kch" event={"ID":"cc5b2813-fb4b-4525-a35e-0c80c92d0542","Type":"ContainerStarted","Data":"ffadd318703efed0ac29c8728afdcbae048ac0ed39fe403cfb83b4d7b8180798"} Jan 27 13:11:51 crc kubenswrapper[4745]: I0127 13:11:51.449235 4745 generic.go:334] "Generic (PLEG): container finished" podID="cc5b2813-fb4b-4525-a35e-0c80c92d0542" containerID="ffadd318703efed0ac29c8728afdcbae048ac0ed39fe403cfb83b4d7b8180798" exitCode=0 Jan 27 13:11:51 crc kubenswrapper[4745]: I0127 13:11:51.449275 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54kch" event={"ID":"cc5b2813-fb4b-4525-a35e-0c80c92d0542","Type":"ContainerDied","Data":"ffadd318703efed0ac29c8728afdcbae048ac0ed39fe403cfb83b4d7b8180798"} Jan 27 13:11:52 crc kubenswrapper[4745]: I0127 13:11:52.463517 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54kch" event={"ID":"cc5b2813-fb4b-4525-a35e-0c80c92d0542","Type":"ContainerStarted","Data":"0a2da27f839363966c413940f4a625ec738cd83a4df7075b5bc83f6edeb128d6"} Jan 27 13:11:52 crc kubenswrapper[4745]: I0127 13:11:52.485066 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-54kch" podStartSLOduration=2.057816817 podStartE2EDuration="5.48505035s" podCreationTimestamp="2026-01-27 13:11:47 +0000 UTC" firstStartedPulling="2026-01-27 13:11:48.423785177 +0000 UTC m=+3601.228695865" lastFinishedPulling="2026-01-27 13:11:51.8510187 +0000 UTC m=+3604.655929398" observedRunningTime="2026-01-27 13:11:52.480918332 +0000 UTC m=+3605.285829030" watchObservedRunningTime="2026-01-27 
13:11:52.48505035 +0000 UTC m=+3605.289961038" Jan 27 13:11:57 crc kubenswrapper[4745]: I0127 13:11:57.601238 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-54kch" Jan 27 13:11:57 crc kubenswrapper[4745]: I0127 13:11:57.602032 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-54kch" Jan 27 13:11:57 crc kubenswrapper[4745]: I0127 13:11:57.648454 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-54kch" Jan 27 13:11:58 crc kubenswrapper[4745]: I0127 13:11:58.559696 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-54kch" Jan 27 13:11:58 crc kubenswrapper[4745]: I0127 13:11:58.612425 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-54kch"] Jan 27 13:12:00 crc kubenswrapper[4745]: I0127 13:12:00.517168 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-54kch" podUID="cc5b2813-fb4b-4525-a35e-0c80c92d0542" containerName="registry-server" containerID="cri-o://0a2da27f839363966c413940f4a625ec738cd83a4df7075b5bc83f6edeb128d6" gracePeriod=2 Jan 27 13:12:00 crc kubenswrapper[4745]: I0127 13:12:00.939627 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-54kch" Jan 27 13:12:00 crc kubenswrapper[4745]: I0127 13:12:00.958770 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxv7j\" (UniqueName: \"kubernetes.io/projected/cc5b2813-fb4b-4525-a35e-0c80c92d0542-kube-api-access-lxv7j\") pod \"cc5b2813-fb4b-4525-a35e-0c80c92d0542\" (UID: \"cc5b2813-fb4b-4525-a35e-0c80c92d0542\") " Jan 27 13:12:00 crc kubenswrapper[4745]: I0127 13:12:00.958954 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5b2813-fb4b-4525-a35e-0c80c92d0542-utilities\") pod \"cc5b2813-fb4b-4525-a35e-0c80c92d0542\" (UID: \"cc5b2813-fb4b-4525-a35e-0c80c92d0542\") " Jan 27 13:12:00 crc kubenswrapper[4745]: I0127 13:12:00.958985 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5b2813-fb4b-4525-a35e-0c80c92d0542-catalog-content\") pod \"cc5b2813-fb4b-4525-a35e-0c80c92d0542\" (UID: \"cc5b2813-fb4b-4525-a35e-0c80c92d0542\") " Jan 27 13:12:00 crc kubenswrapper[4745]: I0127 13:12:00.962070 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc5b2813-fb4b-4525-a35e-0c80c92d0542-utilities" (OuterVolumeSpecName: "utilities") pod "cc5b2813-fb4b-4525-a35e-0c80c92d0542" (UID: "cc5b2813-fb4b-4525-a35e-0c80c92d0542"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:12:00 crc kubenswrapper[4745]: I0127 13:12:00.967721 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc5b2813-fb4b-4525-a35e-0c80c92d0542-kube-api-access-lxv7j" (OuterVolumeSpecName: "kube-api-access-lxv7j") pod "cc5b2813-fb4b-4525-a35e-0c80c92d0542" (UID: "cc5b2813-fb4b-4525-a35e-0c80c92d0542"). InnerVolumeSpecName "kube-api-access-lxv7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:12:01 crc kubenswrapper[4745]: I0127 13:12:01.060203 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5b2813-fb4b-4525-a35e-0c80c92d0542-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:12:01 crc kubenswrapper[4745]: I0127 13:12:01.060238 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxv7j\" (UniqueName: \"kubernetes.io/projected/cc5b2813-fb4b-4525-a35e-0c80c92d0542-kube-api-access-lxv7j\") on node \"crc\" DevicePath \"\"" Jan 27 13:12:01 crc kubenswrapper[4745]: I0127 13:12:01.528957 4745 generic.go:334] "Generic (PLEG): container finished" podID="cc5b2813-fb4b-4525-a35e-0c80c92d0542" containerID="0a2da27f839363966c413940f4a625ec738cd83a4df7075b5bc83f6edeb128d6" exitCode=0 Jan 27 13:12:01 crc kubenswrapper[4745]: I0127 13:12:01.529020 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54kch" event={"ID":"cc5b2813-fb4b-4525-a35e-0c80c92d0542","Type":"ContainerDied","Data":"0a2da27f839363966c413940f4a625ec738cd83a4df7075b5bc83f6edeb128d6"} Jan 27 13:12:01 crc kubenswrapper[4745]: I0127 13:12:01.529063 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54kch" event={"ID":"cc5b2813-fb4b-4525-a35e-0c80c92d0542","Type":"ContainerDied","Data":"3632b0ed709a8e5bb32c5d4e83c601d9607d9ccbc0ea1bc96d8740cd01530791"} Jan 27 13:12:01 crc kubenswrapper[4745]: I0127 13:12:01.529078 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-54kch" Jan 27 13:12:01 crc kubenswrapper[4745]: I0127 13:12:01.529093 4745 scope.go:117] "RemoveContainer" containerID="0a2da27f839363966c413940f4a625ec738cd83a4df7075b5bc83f6edeb128d6" Jan 27 13:12:01 crc kubenswrapper[4745]: I0127 13:12:01.564207 4745 scope.go:117] "RemoveContainer" containerID="ffadd318703efed0ac29c8728afdcbae048ac0ed39fe403cfb83b4d7b8180798" Jan 27 13:12:01 crc kubenswrapper[4745]: I0127 13:12:01.593567 4745 scope.go:117] "RemoveContainer" containerID="5c36208d1a9ca3bfa176d16e53100dc002ab615de18ea39cebff8292900a5974" Jan 27 13:12:01 crc kubenswrapper[4745]: I0127 13:12:01.611439 4745 scope.go:117] "RemoveContainer" containerID="0a2da27f839363966c413940f4a625ec738cd83a4df7075b5bc83f6edeb128d6" Jan 27 13:12:01 crc kubenswrapper[4745]: E0127 13:12:01.612174 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a2da27f839363966c413940f4a625ec738cd83a4df7075b5bc83f6edeb128d6\": container with ID starting with 0a2da27f839363966c413940f4a625ec738cd83a4df7075b5bc83f6edeb128d6 not found: ID does not exist" containerID="0a2da27f839363966c413940f4a625ec738cd83a4df7075b5bc83f6edeb128d6" Jan 27 13:12:01 crc kubenswrapper[4745]: I0127 13:12:01.612219 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a2da27f839363966c413940f4a625ec738cd83a4df7075b5bc83f6edeb128d6"} err="failed to get container status \"0a2da27f839363966c413940f4a625ec738cd83a4df7075b5bc83f6edeb128d6\": rpc error: code = NotFound desc = could not find container \"0a2da27f839363966c413940f4a625ec738cd83a4df7075b5bc83f6edeb128d6\": container with ID starting with 0a2da27f839363966c413940f4a625ec738cd83a4df7075b5bc83f6edeb128d6 not found: ID does not exist" Jan 27 13:12:01 crc kubenswrapper[4745]: I0127 13:12:01.612250 4745 scope.go:117] 
"RemoveContainer" containerID="ffadd318703efed0ac29c8728afdcbae048ac0ed39fe403cfb83b4d7b8180798" Jan 27 13:12:01 crc kubenswrapper[4745]: E0127 13:12:01.612730 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffadd318703efed0ac29c8728afdcbae048ac0ed39fe403cfb83b4d7b8180798\": container with ID starting with ffadd318703efed0ac29c8728afdcbae048ac0ed39fe403cfb83b4d7b8180798 not found: ID does not exist" containerID="ffadd318703efed0ac29c8728afdcbae048ac0ed39fe403cfb83b4d7b8180798" Jan 27 13:12:01 crc kubenswrapper[4745]: I0127 13:12:01.612786 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffadd318703efed0ac29c8728afdcbae048ac0ed39fe403cfb83b4d7b8180798"} err="failed to get container status \"ffadd318703efed0ac29c8728afdcbae048ac0ed39fe403cfb83b4d7b8180798\": rpc error: code = NotFound desc = could not find container \"ffadd318703efed0ac29c8728afdcbae048ac0ed39fe403cfb83b4d7b8180798\": container with ID starting with ffadd318703efed0ac29c8728afdcbae048ac0ed39fe403cfb83b4d7b8180798 not found: ID does not exist" Jan 27 13:12:01 crc kubenswrapper[4745]: I0127 13:12:01.612838 4745 scope.go:117] "RemoveContainer" containerID="5c36208d1a9ca3bfa176d16e53100dc002ab615de18ea39cebff8292900a5974" Jan 27 13:12:01 crc kubenswrapper[4745]: E0127 13:12:01.613277 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c36208d1a9ca3bfa176d16e53100dc002ab615de18ea39cebff8292900a5974\": container with ID starting with 5c36208d1a9ca3bfa176d16e53100dc002ab615de18ea39cebff8292900a5974 not found: ID does not exist" containerID="5c36208d1a9ca3bfa176d16e53100dc002ab615de18ea39cebff8292900a5974" Jan 27 13:12:01 crc kubenswrapper[4745]: I0127 13:12:01.613307 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c36208d1a9ca3bfa176d16e53100dc002ab615de18ea39cebff8292900a5974"} err="failed to get container status \"5c36208d1a9ca3bfa176d16e53100dc002ab615de18ea39cebff8292900a5974\": rpc error: code = NotFound desc = could not find container \"5c36208d1a9ca3bfa176d16e53100dc002ab615de18ea39cebff8292900a5974\": container with ID starting with 5c36208d1a9ca3bfa176d16e53100dc002ab615de18ea39cebff8292900a5974 not found: ID does not exist" Jan 27 13:12:01 crc kubenswrapper[4745]: I0127 13:12:01.615399 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc5b2813-fb4b-4525-a35e-0c80c92d0542-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc5b2813-fb4b-4525-a35e-0c80c92d0542" (UID: "cc5b2813-fb4b-4525-a35e-0c80c92d0542"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:12:01 crc kubenswrapper[4745]: I0127 13:12:01.670137 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5b2813-fb4b-4525-a35e-0c80c92d0542-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:12:01 crc kubenswrapper[4745]: I0127 13:12:01.874709 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-54kch"] Jan 27 13:12:01 crc kubenswrapper[4745]: I0127 13:12:01.882604 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-54kch"] Jan 27 13:12:02 crc kubenswrapper[4745]: I0127 13:12:02.083395 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc5b2813-fb4b-4525-a35e-0c80c92d0542" path="/var/lib/kubelet/pods/cc5b2813-fb4b-4525-a35e-0c80c92d0542/volumes" Jan 27 13:12:05 crc kubenswrapper[4745]: I0127 13:12:05.967267 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:12:05 crc kubenswrapper[4745]: I0127 13:12:05.967362 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:12:16 crc kubenswrapper[4745]: I0127 13:12:16.458447 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jsjxh"] Jan 27 13:12:16 crc kubenswrapper[4745]: E0127 13:12:16.459309 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5b2813-fb4b-4525-a35e-0c80c92d0542" containerName="extract-utilities" Jan 27 13:12:16 crc kubenswrapper[4745]: I0127 13:12:16.459326 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5b2813-fb4b-4525-a35e-0c80c92d0542" containerName="extract-utilities" Jan 27 13:12:16 crc kubenswrapper[4745]: E0127 13:12:16.459340 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5b2813-fb4b-4525-a35e-0c80c92d0542" containerName="extract-content" Jan 27 13:12:16 crc kubenswrapper[4745]: I0127 13:12:16.459348 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5b2813-fb4b-4525-a35e-0c80c92d0542" containerName="extract-content" Jan 27 13:12:16 crc kubenswrapper[4745]: E0127 13:12:16.459365 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5b2813-fb4b-4525-a35e-0c80c92d0542" containerName="registry-server" Jan 27 13:12:16 crc kubenswrapper[4745]: I0127 13:12:16.459374 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5b2813-fb4b-4525-a35e-0c80c92d0542" containerName="registry-server" Jan 27 13:12:16 crc kubenswrapper[4745]: I0127 13:12:16.459570 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc5b2813-fb4b-4525-a35e-0c80c92d0542" containerName="registry-server" Jan 27 13:12:16 crc kubenswrapper[4745]: I0127 13:12:16.460941 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jsjxh" Jan 27 13:12:16 crc kubenswrapper[4745]: I0127 13:12:16.472258 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jsjxh"] Jan 27 13:12:16 crc kubenswrapper[4745]: I0127 13:12:16.575868 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4046c85e-105d-4197-bc33-01a814c8f08f-catalog-content\") pod \"redhat-marketplace-jsjxh\" (UID: \"4046c85e-105d-4197-bc33-01a814c8f08f\") " pod="openshift-marketplace/redhat-marketplace-jsjxh" Jan 27 13:12:16 crc kubenswrapper[4745]: I0127 13:12:16.576206 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4046c85e-105d-4197-bc33-01a814c8f08f-utilities\") pod \"redhat-marketplace-jsjxh\" (UID: \"4046c85e-105d-4197-bc33-01a814c8f08f\") " pod="openshift-marketplace/redhat-marketplace-jsjxh" Jan 27 13:12:16 crc kubenswrapper[4745]: I0127 13:12:16.576306 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kjh2\" (UniqueName: \"kubernetes.io/projected/4046c85e-105d-4197-bc33-01a814c8f08f-kube-api-access-9kjh2\") pod \"redhat-marketplace-jsjxh\" (UID: \"4046c85e-105d-4197-bc33-01a814c8f08f\") " pod="openshift-marketplace/redhat-marketplace-jsjxh" Jan 27 13:12:16 crc kubenswrapper[4745]: I0127 13:12:16.678199 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4046c85e-105d-4197-bc33-01a814c8f08f-catalog-content\") pod \"redhat-marketplace-jsjxh\" (UID: \"4046c85e-105d-4197-bc33-01a814c8f08f\") " pod="openshift-marketplace/redhat-marketplace-jsjxh" Jan 27 13:12:16 crc kubenswrapper[4745]: I0127 13:12:16.678511 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4046c85e-105d-4197-bc33-01a814c8f08f-utilities\") pod \"redhat-marketplace-jsjxh\" (UID: \"4046c85e-105d-4197-bc33-01a814c8f08f\") " pod="openshift-marketplace/redhat-marketplace-jsjxh" Jan 27 13:12:16 crc kubenswrapper[4745]: I0127 13:12:16.678608 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kjh2\" (UniqueName: \"kubernetes.io/projected/4046c85e-105d-4197-bc33-01a814c8f08f-kube-api-access-9kjh2\") pod \"redhat-marketplace-jsjxh\" (UID: \"4046c85e-105d-4197-bc33-01a814c8f08f\") " pod="openshift-marketplace/redhat-marketplace-jsjxh" Jan 27 13:12:16 crc kubenswrapper[4745]: I0127 13:12:16.678694 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4046c85e-105d-4197-bc33-01a814c8f08f-catalog-content\") pod \"redhat-marketplace-jsjxh\" (UID: \"4046c85e-105d-4197-bc33-01a814c8f08f\") " pod="openshift-marketplace/redhat-marketplace-jsjxh" Jan 27 13:12:16 crc kubenswrapper[4745]: I0127 13:12:16.678943 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4046c85e-105d-4197-bc33-01a814c8f08f-utilities\") pod \"redhat-marketplace-jsjxh\" (UID: \"4046c85e-105d-4197-bc33-01a814c8f08f\") " pod="openshift-marketplace/redhat-marketplace-jsjxh" Jan 27 13:12:16 crc kubenswrapper[4745]: I0127 13:12:16.696161 4745 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9kjh2\" (UniqueName: \"kubernetes.io/projected/4046c85e-105d-4197-bc33-01a814c8f08f-kube-api-access-9kjh2\") pod \"redhat-marketplace-jsjxh\" (UID: \"4046c85e-105d-4197-bc33-01a814c8f08f\") " pod="openshift-marketplace/redhat-marketplace-jsjxh" Jan 27 13:12:16 crc kubenswrapper[4745]: I0127 13:12:16.789409 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jsjxh" Jan 27 13:12:17 crc kubenswrapper[4745]: I0127 13:12:17.224639 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jsjxh"] Jan 27 13:12:17 crc kubenswrapper[4745]: I0127 13:12:17.654319 4745 generic.go:334] "Generic (PLEG): container finished" podID="4046c85e-105d-4197-bc33-01a814c8f08f" containerID="095eefe509bcc89c962d161bdb19e4abb41db7ffbc10c8ec0f1919b90eb29267" exitCode=0 Jan 27 13:12:17 crc kubenswrapper[4745]: I0127 13:12:17.654441 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jsjxh" event={"ID":"4046c85e-105d-4197-bc33-01a814c8f08f","Type":"ContainerDied","Data":"095eefe509bcc89c962d161bdb19e4abb41db7ffbc10c8ec0f1919b90eb29267"} Jan 27 13:12:17 crc kubenswrapper[4745]: I0127 13:12:17.654729 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jsjxh" event={"ID":"4046c85e-105d-4197-bc33-01a814c8f08f","Type":"ContainerStarted","Data":"7997d436697c3fe64e2a46e03e7b3bb47c61426609ccc3af6e00362e44c10455"} Jan 27 13:12:19 crc kubenswrapper[4745]: I0127 13:12:19.668457 4745 generic.go:334] "Generic (PLEG): container finished" podID="4046c85e-105d-4197-bc33-01a814c8f08f" containerID="c23ffed04b418e40756df4e04bf72ec3ad7c28e27439b51e79ce7f73d09d83da" exitCode=0 Jan 27 13:12:19 crc kubenswrapper[4745]: I0127 13:12:19.668549 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jsjxh" event={"ID":"4046c85e-105d-4197-bc33-01a814c8f08f","Type":"ContainerDied","Data":"c23ffed04b418e40756df4e04bf72ec3ad7c28e27439b51e79ce7f73d09d83da"} Jan 27 13:12:20 crc kubenswrapper[4745]: I0127 13:12:20.677429 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jsjxh" event={"ID":"4046c85e-105d-4197-bc33-01a814c8f08f","Type":"ContainerStarted","Data":"e542b68e5683a56cf1872685eec347dcf7a9433539be943e3bb29b979789df1d"} Jan 27 13:12:20 crc kubenswrapper[4745]: I0127 13:12:20.743456 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jsjxh" podStartSLOduration=2.284836719 podStartE2EDuration="4.743438618s" podCreationTimestamp="2026-01-27 13:12:16 +0000 UTC" firstStartedPulling="2026-01-27 13:12:17.656357047 +0000 UTC m=+3630.461267735" lastFinishedPulling="2026-01-27 13:12:20.114958946 +0000 UTC m=+3632.919869634" observedRunningTime="2026-01-27 13:12:20.740423872 +0000 UTC m=+3633.545334570" watchObservedRunningTime="2026-01-27 13:12:20.743438618 +0000 UTC m=+3633.548349306" Jan 27 13:12:26 crc kubenswrapper[4745]: I0127 13:12:26.790261 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jsjxh" Jan 27 13:12:26 crc kubenswrapper[4745]: I0127 13:12:26.790500 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jsjxh" Jan 27 13:12:26 crc kubenswrapper[4745]: I0127 13:12:26.833910 4745 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jsjxh" Jan 27 13:12:27 crc kubenswrapper[4745]: I0127 13:12:27.789222 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jsjxh" Jan 27 13:12:27 crc kubenswrapper[4745]: I0127 13:12:27.851976 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jsjxh"] Jan 27 13:12:29 crc kubenswrapper[4745]: I0127 13:12:29.753942 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jsjxh" podUID="4046c85e-105d-4197-bc33-01a814c8f08f" containerName="registry-server" containerID="cri-o://e542b68e5683a56cf1872685eec347dcf7a9433539be943e3bb29b979789df1d" gracePeriod=2 Jan 27 13:12:30 crc kubenswrapper[4745]: I0127 13:12:30.226030 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jsjxh" Jan 27 13:12:30 crc kubenswrapper[4745]: I0127 13:12:30.306982 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4046c85e-105d-4197-bc33-01a814c8f08f-catalog-content\") pod \"4046c85e-105d-4197-bc33-01a814c8f08f\" (UID: \"4046c85e-105d-4197-bc33-01a814c8f08f\") " Jan 27 13:12:30 crc kubenswrapper[4745]: I0127 13:12:30.307027 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kjh2\" (UniqueName: \"kubernetes.io/projected/4046c85e-105d-4197-bc33-01a814c8f08f-kube-api-access-9kjh2\") pod \"4046c85e-105d-4197-bc33-01a814c8f08f\" (UID: \"4046c85e-105d-4197-bc33-01a814c8f08f\") " Jan 27 13:12:30 crc kubenswrapper[4745]: I0127 13:12:30.307163 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4046c85e-105d-4197-bc33-01a814c8f08f-utilities\") pod \"4046c85e-105d-4197-bc33-01a814c8f08f\" (UID: \"4046c85e-105d-4197-bc33-01a814c8f08f\") " Jan 27 13:12:30 crc kubenswrapper[4745]: I0127 13:12:30.308320 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4046c85e-105d-4197-bc33-01a814c8f08f-utilities" (OuterVolumeSpecName: "utilities") pod "4046c85e-105d-4197-bc33-01a814c8f08f" (UID: "4046c85e-105d-4197-bc33-01a814c8f08f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:12:30 crc kubenswrapper[4745]: I0127 13:12:30.314166 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4046c85e-105d-4197-bc33-01a814c8f08f-kube-api-access-9kjh2" (OuterVolumeSpecName: "kube-api-access-9kjh2") pod "4046c85e-105d-4197-bc33-01a814c8f08f" (UID: "4046c85e-105d-4197-bc33-01a814c8f08f"). InnerVolumeSpecName "kube-api-access-9kjh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:12:30 crc kubenswrapper[4745]: I0127 13:12:30.329066 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4046c85e-105d-4197-bc33-01a814c8f08f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4046c85e-105d-4197-bc33-01a814c8f08f" (UID: "4046c85e-105d-4197-bc33-01a814c8f08f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:12:30 crc kubenswrapper[4745]: I0127 13:12:30.409850 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4046c85e-105d-4197-bc33-01a814c8f08f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:12:30 crc kubenswrapper[4745]: I0127 13:12:30.409921 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kjh2\" (UniqueName: \"kubernetes.io/projected/4046c85e-105d-4197-bc33-01a814c8f08f-kube-api-access-9kjh2\") on node \"crc\" DevicePath \"\"" Jan 27 13:12:30 crc kubenswrapper[4745]: I0127 13:12:30.409941 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4046c85e-105d-4197-bc33-01a814c8f08f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:12:30 crc kubenswrapper[4745]: I0127 13:12:30.766874 4745 generic.go:334] "Generic (PLEG): container finished" podID="4046c85e-105d-4197-bc33-01a814c8f08f" containerID="e542b68e5683a56cf1872685eec347dcf7a9433539be943e3bb29b979789df1d" exitCode=0 Jan 27 13:12:30 crc kubenswrapper[4745]: I0127 13:12:30.766929 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jsjxh" event={"ID":"4046c85e-105d-4197-bc33-01a814c8f08f","Type":"ContainerDied","Data":"e542b68e5683a56cf1872685eec347dcf7a9433539be943e3bb29b979789df1d"} Jan 27 13:12:30 crc kubenswrapper[4745]: I0127 13:12:30.766994 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jsjxh" Jan 27 13:12:30 crc kubenswrapper[4745]: I0127 13:12:30.767896 4745 scope.go:117] "RemoveContainer" containerID="e542b68e5683a56cf1872685eec347dcf7a9433539be943e3bb29b979789df1d" Jan 27 13:12:30 crc kubenswrapper[4745]: I0127 13:12:30.767876 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jsjxh" event={"ID":"4046c85e-105d-4197-bc33-01a814c8f08f","Type":"ContainerDied","Data":"7997d436697c3fe64e2a46e03e7b3bb47c61426609ccc3af6e00362e44c10455"} Jan 27 13:12:30 crc kubenswrapper[4745]: I0127 13:12:30.789137 4745 scope.go:117] "RemoveContainer" containerID="c23ffed04b418e40756df4e04bf72ec3ad7c28e27439b51e79ce7f73d09d83da" Jan 27 13:12:30 crc kubenswrapper[4745]: I0127 13:12:30.804128 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jsjxh"] Jan 27 13:12:30 crc kubenswrapper[4745]: I0127 13:12:30.811708 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jsjxh"] Jan 27 13:12:30 crc kubenswrapper[4745]: I0127 13:12:30.831040 4745 scope.go:117] "RemoveContainer" containerID="095eefe509bcc89c962d161bdb19e4abb41db7ffbc10c8ec0f1919b90eb29267" Jan 27 13:12:30 crc kubenswrapper[4745]: I0127 13:12:30.851881 4745 scope.go:117] "RemoveContainer" containerID="e542b68e5683a56cf1872685eec347dcf7a9433539be943e3bb29b979789df1d" Jan 27 13:12:30 crc kubenswrapper[4745]: E0127 13:12:30.852350 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e542b68e5683a56cf1872685eec347dcf7a9433539be943e3bb29b979789df1d\": container with ID starting with e542b68e5683a56cf1872685eec347dcf7a9433539be943e3bb29b979789df1d not found: ID does not exist" containerID="e542b68e5683a56cf1872685eec347dcf7a9433539be943e3bb29b979789df1d" Jan 27 13:12:30 crc kubenswrapper[4745]: I0127 13:12:30.852401 4745 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e542b68e5683a56cf1872685eec347dcf7a9433539be943e3bb29b979789df1d"} err="failed to get container status \"e542b68e5683a56cf1872685eec347dcf7a9433539be943e3bb29b979789df1d\": rpc error: code = NotFound desc = could not find container \"e542b68e5683a56cf1872685eec347dcf7a9433539be943e3bb29b979789df1d\": container with ID starting with e542b68e5683a56cf1872685eec347dcf7a9433539be943e3bb29b979789df1d not found: ID does not exist" Jan 27 13:12:30 crc kubenswrapper[4745]: I0127 13:12:30.852436 4745 scope.go:117] "RemoveContainer" containerID="c23ffed04b418e40756df4e04bf72ec3ad7c28e27439b51e79ce7f73d09d83da" Jan 27 13:12:30 crc kubenswrapper[4745]: E0127 13:12:30.852778 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c23ffed04b418e40756df4e04bf72ec3ad7c28e27439b51e79ce7f73d09d83da\": container with ID starting with c23ffed04b418e40756df4e04bf72ec3ad7c28e27439b51e79ce7f73d09d83da not found: ID does not exist" containerID="c23ffed04b418e40756df4e04bf72ec3ad7c28e27439b51e79ce7f73d09d83da" Jan 27 13:12:30 crc kubenswrapper[4745]: I0127 13:12:30.852871 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c23ffed04b418e40756df4e04bf72ec3ad7c28e27439b51e79ce7f73d09d83da"} err="failed to get container status \"c23ffed04b418e40756df4e04bf72ec3ad7c28e27439b51e79ce7f73d09d83da\": rpc error: code = NotFound desc = could not find container \"c23ffed04b418e40756df4e04bf72ec3ad7c28e27439b51e79ce7f73d09d83da\": container with ID starting with c23ffed04b418e40756df4e04bf72ec3ad7c28e27439b51e79ce7f73d09d83da not found: ID does not exist" Jan 27 13:12:30 crc kubenswrapper[4745]: I0127 13:12:30.852905 4745 scope.go:117] "RemoveContainer" containerID="095eefe509bcc89c962d161bdb19e4abb41db7ffbc10c8ec0f1919b90eb29267" Jan 27 13:12:30 crc kubenswrapper[4745]: E0127 13:12:30.853234 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"095eefe509bcc89c962d161bdb19e4abb41db7ffbc10c8ec0f1919b90eb29267\": container with ID starting with 095eefe509bcc89c962d161bdb19e4abb41db7ffbc10c8ec0f1919b90eb29267 not found: ID does not exist" containerID="095eefe509bcc89c962d161bdb19e4abb41db7ffbc10c8ec0f1919b90eb29267" Jan 27 13:12:30 crc kubenswrapper[4745]: I0127 13:12:30.853261 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"095eefe509bcc89c962d161bdb19e4abb41db7ffbc10c8ec0f1919b90eb29267"} err="failed to get container status \"095eefe509bcc89c962d161bdb19e4abb41db7ffbc10c8ec0f1919b90eb29267\": rpc error: code = NotFound desc = could not find container \"095eefe509bcc89c962d161bdb19e4abb41db7ffbc10c8ec0f1919b90eb29267\": container with ID starting with 095eefe509bcc89c962d161bdb19e4abb41db7ffbc10c8ec0f1919b90eb29267 not found: ID does not exist" Jan 27 13:12:32 crc kubenswrapper[4745]: I0127 13:12:32.083068 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4046c85e-105d-4197-bc33-01a814c8f08f" path="/var/lib/kubelet/pods/4046c85e-105d-4197-bc33-01a814c8f08f/volumes" Jan 27 13:12:35 crc kubenswrapper[4745]: I0127 13:12:35.968382 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:12:35 crc kubenswrapper[4745]: I0127 13:12:35.968717 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:12:38 crc kubenswrapper[4745]: I0127 13:12:38.162393 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nvhmq"] Jan 27 13:12:38 crc kubenswrapper[4745]: E0127 13:12:38.163065 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4046c85e-105d-4197-bc33-01a814c8f08f" containerName="extract-content" Jan 27 13:12:38 crc kubenswrapper[4745]: I0127 13:12:38.163083 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="4046c85e-105d-4197-bc33-01a814c8f08f" containerName="extract-content" Jan 27 13:12:38 crc kubenswrapper[4745]: E0127 13:12:38.163105 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4046c85e-105d-4197-bc33-01a814c8f08f" containerName="registry-server" Jan 27 13:12:38 crc kubenswrapper[4745]: I0127 13:12:38.163114 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="4046c85e-105d-4197-bc33-01a814c8f08f" containerName="registry-server" Jan 27 13:12:38 crc kubenswrapper[4745]: E0127 13:12:38.163126 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4046c85e-105d-4197-bc33-01a814c8f08f" containerName="extract-utilities" Jan 27 13:12:38 crc kubenswrapper[4745]: I0127 13:12:38.163132 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="4046c85e-105d-4197-bc33-01a814c8f08f" containerName="extract-utilities" Jan 27 13:12:38 crc kubenswrapper[4745]: I0127 13:12:38.163280 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="4046c85e-105d-4197-bc33-01a814c8f08f" containerName="registry-server" Jan 27 13:12:38 crc kubenswrapper[4745]: I0127 13:12:38.164604 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nvhmq" Jan 27 13:12:38 crc kubenswrapper[4745]: I0127 13:12:38.172126 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nvhmq"] Jan 27 13:12:38 crc kubenswrapper[4745]: I0127 13:12:38.230599 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6201ed46-afc3-4a95-9f79-c3d66161254a-catalog-content\") pod \"redhat-operators-nvhmq\" (UID: \"6201ed46-afc3-4a95-9f79-c3d66161254a\") " pod="openshift-marketplace/redhat-operators-nvhmq" Jan 27 13:12:38 crc kubenswrapper[4745]: I0127 13:12:38.230686 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6201ed46-afc3-4a95-9f79-c3d66161254a-utilities\") pod \"redhat-operators-nvhmq\" (UID: \"6201ed46-afc3-4a95-9f79-c3d66161254a\") " pod="openshift-marketplace/redhat-operators-nvhmq" Jan 27 13:12:38 crc kubenswrapper[4745]: I0127 13:12:38.230733 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb7dp\" (UniqueName: \"kubernetes.io/projected/6201ed46-afc3-4a95-9f79-c3d66161254a-kube-api-access-cb7dp\") pod \"redhat-operators-nvhmq\" (UID: \"6201ed46-afc3-4a95-9f79-c3d66161254a\") " pod="openshift-marketplace/redhat-operators-nvhmq" Jan 27 13:12:38 crc kubenswrapper[4745]: I0127 13:12:38.331696 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6201ed46-afc3-4a95-9f79-c3d66161254a-catalog-content\") pod \"redhat-operators-nvhmq\" (UID: \"6201ed46-afc3-4a95-9f79-c3d66161254a\") " pod="openshift-marketplace/redhat-operators-nvhmq" Jan 27 13:12:38 crc kubenswrapper[4745]: I0127 13:12:38.331761 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6201ed46-afc3-4a95-9f79-c3d66161254a-utilities\") pod \"redhat-operators-nvhmq\" (UID: \"6201ed46-afc3-4a95-9f79-c3d66161254a\") " pod="openshift-marketplace/redhat-operators-nvhmq" Jan 27 13:12:38 crc kubenswrapper[4745]: I0127 13:12:38.331802 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb7dp\" (UniqueName: \"kubernetes.io/projected/6201ed46-afc3-4a95-9f79-c3d66161254a-kube-api-access-cb7dp\") pod \"redhat-operators-nvhmq\" (UID: \"6201ed46-afc3-4a95-9f79-c3d66161254a\") " pod="openshift-marketplace/redhat-operators-nvhmq" Jan 27 13:12:38 crc kubenswrapper[4745]: I0127 13:12:38.332209 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6201ed46-afc3-4a95-9f79-c3d66161254a-catalog-content\") pod \"redhat-operators-nvhmq\" (UID: \"6201ed46-afc3-4a95-9f79-c3d66161254a\") " pod="openshift-marketplace/redhat-operators-nvhmq" Jan 27 13:12:38 crc kubenswrapper[4745]: I0127 13:12:38.332358 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6201ed46-afc3-4a95-9f79-c3d66161254a-utilities\") pod \"redhat-operators-nvhmq\" (UID: \"6201ed46-afc3-4a95-9f79-c3d66161254a\") " pod="openshift-marketplace/redhat-operators-nvhmq" Jan 27 13:12:38 crc kubenswrapper[4745]: I0127 13:12:38.354769 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cb7dp\" (UniqueName: \"kubernetes.io/projected/6201ed46-afc3-4a95-9f79-c3d66161254a-kube-api-access-cb7dp\") pod \"redhat-operators-nvhmq\" (UID: \"6201ed46-afc3-4a95-9f79-c3d66161254a\") " pod="openshift-marketplace/redhat-operators-nvhmq" Jan 27 13:12:38 crc kubenswrapper[4745]: I0127 13:12:38.483351 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nvhmq" Jan 27 13:12:40 crc kubenswrapper[4745]: I0127 13:12:40.175699 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nvhmq"] Jan 27 13:12:40 crc kubenswrapper[4745]: W0127 13:12:40.185065 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6201ed46_afc3_4a95_9f79_c3d66161254a.slice/crio-57fd989268de5e8677dbc34f37d5a8e2799c85bd4a8945975aa92ea7b094abaa WatchSource:0}: Error finding container 57fd989268de5e8677dbc34f37d5a8e2799c85bd4a8945975aa92ea7b094abaa: Status 404 returned error can't find the container with id 57fd989268de5e8677dbc34f37d5a8e2799c85bd4a8945975aa92ea7b094abaa Jan 27 13:12:40 crc kubenswrapper[4745]: I0127 13:12:40.845299 4745 generic.go:334] "Generic (PLEG): container finished" podID="6201ed46-afc3-4a95-9f79-c3d66161254a" containerID="befce33591d3e6dadd90752e9088f24ae63a39dd78039d61a687d9d8f82e04b3" exitCode=0 Jan 27 13:12:40 crc kubenswrapper[4745]: I0127 13:12:40.845418 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvhmq" event={"ID":"6201ed46-afc3-4a95-9f79-c3d66161254a","Type":"ContainerDied","Data":"befce33591d3e6dadd90752e9088f24ae63a39dd78039d61a687d9d8f82e04b3"} Jan 27 13:12:40 crc kubenswrapper[4745]: I0127 13:12:40.845695 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvhmq" event={"ID":"6201ed46-afc3-4a95-9f79-c3d66161254a","Type":"ContainerStarted","Data":"57fd989268de5e8677dbc34f37d5a8e2799c85bd4a8945975aa92ea7b094abaa"} Jan 27 13:12:42 crc kubenswrapper[4745]: I0127 13:12:42.543522 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k4dbb"] Jan 27 13:12:42 crc kubenswrapper[4745]: I0127 13:12:42.545670 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k4dbb" Jan 27 13:12:42 crc kubenswrapper[4745]: I0127 13:12:42.558294 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k4dbb"] Jan 27 13:12:42 crc kubenswrapper[4745]: I0127 13:12:42.619656 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faa1aaa7-bc00-4be9-8b36-6834b6a24b79-utilities\") pod \"certified-operators-k4dbb\" (UID: \"faa1aaa7-bc00-4be9-8b36-6834b6a24b79\") " pod="openshift-marketplace/certified-operators-k4dbb" Jan 27 13:12:42 crc kubenswrapper[4745]: I0127 13:12:42.619962 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9w92\" (UniqueName: \"kubernetes.io/projected/faa1aaa7-bc00-4be9-8b36-6834b6a24b79-kube-api-access-d9w92\") pod \"certified-operators-k4dbb\" (UID: \"faa1aaa7-bc00-4be9-8b36-6834b6a24b79\") " pod="openshift-marketplace/certified-operators-k4dbb" Jan 27 13:12:42 crc kubenswrapper[4745]: I0127 13:12:42.620109 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faa1aaa7-bc00-4be9-8b36-6834b6a24b79-catalog-content\") pod \"certified-operators-k4dbb\" (UID: \"faa1aaa7-bc00-4be9-8b36-6834b6a24b79\") " pod="openshift-marketplace/certified-operators-k4dbb" Jan 27 13:12:42 crc kubenswrapper[4745]: I0127 13:12:42.721309 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faa1aaa7-bc00-4be9-8b36-6834b6a24b79-catalog-content\") pod \"certified-operators-k4dbb\" (UID: \"faa1aaa7-bc00-4be9-8b36-6834b6a24b79\") " pod="openshift-marketplace/certified-operators-k4dbb" Jan 27 13:12:42 crc kubenswrapper[4745]: I0127 13:12:42.721655 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faa1aaa7-bc00-4be9-8b36-6834b6a24b79-utilities\") pod \"certified-operators-k4dbb\" (UID: \"faa1aaa7-bc00-4be9-8b36-6834b6a24b79\") " pod="openshift-marketplace/certified-operators-k4dbb" Jan 27 13:12:42 crc kubenswrapper[4745]: I0127 13:12:42.721862 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9w92\" (UniqueName: \"kubernetes.io/projected/faa1aaa7-bc00-4be9-8b36-6834b6a24b79-kube-api-access-d9w92\") pod \"certified-operators-k4dbb\" (UID: \"faa1aaa7-bc00-4be9-8b36-6834b6a24b79\") " pod="openshift-marketplace/certified-operators-k4dbb" Jan 27 13:12:42 crc kubenswrapper[4745]: I0127 13:12:42.722205 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faa1aaa7-bc00-4be9-8b36-6834b6a24b79-catalog-content\") pod \"certified-operators-k4dbb\" (UID: \"faa1aaa7-bc00-4be9-8b36-6834b6a24b79\") " pod="openshift-marketplace/certified-operators-k4dbb" Jan 27 13:12:42 crc kubenswrapper[4745]: I0127 13:12:42.722215 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faa1aaa7-bc00-4be9-8b36-6834b6a24b79-utilities\") pod \"certified-operators-k4dbb\" (UID: \"faa1aaa7-bc00-4be9-8b36-6834b6a24b79\") " pod="openshift-marketplace/certified-operators-k4dbb" Jan 27 13:12:42 crc kubenswrapper[4745]: I0127 13:12:42.742739 4745 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-d9w92\" (UniqueName: \"kubernetes.io/projected/faa1aaa7-bc00-4be9-8b36-6834b6a24b79-kube-api-access-d9w92\") pod \"certified-operators-k4dbb\" (UID: \"faa1aaa7-bc00-4be9-8b36-6834b6a24b79\") " pod="openshift-marketplace/certified-operators-k4dbb" Jan 27 13:12:42 crc kubenswrapper[4745]: I0127 13:12:42.862266 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvhmq" event={"ID":"6201ed46-afc3-4a95-9f79-c3d66161254a","Type":"ContainerStarted","Data":"9487684eeac9bfd0d4ab13495f76876ec9f15cdd5e6431c1ef823ecd4b2e1ce5"} Jan 27 13:12:42 crc kubenswrapper[4745]: I0127 13:12:42.877700 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k4dbb" Jan 27 13:12:43 crc kubenswrapper[4745]: I0127 13:12:43.387856 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k4dbb"] Jan 27 13:12:43 crc kubenswrapper[4745]: I0127 13:12:43.871147 4745 generic.go:334] "Generic (PLEG): container finished" podID="faa1aaa7-bc00-4be9-8b36-6834b6a24b79" containerID="fb1fbe6176dd00f367ee5b21dc005d760cdfee466eb9967d327f870f900ac5dc" exitCode=0 Jan 27 13:12:43 crc kubenswrapper[4745]: I0127 13:12:43.871193 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4dbb" event={"ID":"faa1aaa7-bc00-4be9-8b36-6834b6a24b79","Type":"ContainerDied","Data":"fb1fbe6176dd00f367ee5b21dc005d760cdfee466eb9967d327f870f900ac5dc"} Jan 27 13:12:43 crc kubenswrapper[4745]: I0127 13:12:43.871241 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4dbb" event={"ID":"faa1aaa7-bc00-4be9-8b36-6834b6a24b79","Type":"ContainerStarted","Data":"5366d895e4cb800f39de9d115e7667a0c58c24a9a5a5105327d8f63f9d9fd47b"} Jan 27 13:12:43 crc kubenswrapper[4745]: I0127 13:12:43.874295 4745 generic.go:334] "Generic (PLEG): container finished" podID="6201ed46-afc3-4a95-9f79-c3d66161254a" containerID="9487684eeac9bfd0d4ab13495f76876ec9f15cdd5e6431c1ef823ecd4b2e1ce5" exitCode=0 Jan 27 13:12:43 crc kubenswrapper[4745]: I0127 13:12:43.874357 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvhmq" event={"ID":"6201ed46-afc3-4a95-9f79-c3d66161254a","Type":"ContainerDied","Data":"9487684eeac9bfd0d4ab13495f76876ec9f15cdd5e6431c1ef823ecd4b2e1ce5"} Jan 27 13:12:45 crc kubenswrapper[4745]: I0127 13:12:45.893347 4745 generic.go:334] "Generic (PLEG): container finished" podID="faa1aaa7-bc00-4be9-8b36-6834b6a24b79" containerID="af08bc95405c3b9721a9334eeb50cfb5d9fdfb8ad8877754ffa52075d4ce4f55" exitCode=0 Jan 27 13:12:45 crc kubenswrapper[4745]: I0127 13:12:45.893432 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4dbb" event={"ID":"faa1aaa7-bc00-4be9-8b36-6834b6a24b79","Type":"ContainerDied","Data":"af08bc95405c3b9721a9334eeb50cfb5d9fdfb8ad8877754ffa52075d4ce4f55"} Jan 27 13:12:45 crc kubenswrapper[4745]: I0127 13:12:45.899080 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvhmq" event={"ID":"6201ed46-afc3-4a95-9f79-c3d66161254a","Type":"ContainerStarted","Data":"9097e56a48c840100c1a2769731eb24260004fc648e29f510e100b6bd28301ef"} Jan 27 13:12:47 crc kubenswrapper[4745]: I0127 13:12:47.915545 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4dbb" 
event={"ID":"faa1aaa7-bc00-4be9-8b36-6834b6a24b79","Type":"ContainerStarted","Data":"4f00183566a17ae8da6b7f84b223fad78bd336af8474edbabebe28b02b6bf8b6"} Jan 27 13:12:47 crc kubenswrapper[4745]: I0127 13:12:47.935897 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nvhmq" podStartSLOduration=5.857069849 podStartE2EDuration="9.935879571s" podCreationTimestamp="2026-01-27 13:12:38 +0000 UTC" firstStartedPulling="2026-01-27 13:12:40.846894774 +0000 UTC m=+3653.651805462" lastFinishedPulling="2026-01-27 13:12:44.925704496 +0000 UTC m=+3657.730615184" observedRunningTime="2026-01-27 13:12:45.946986842 +0000 UTC m=+3658.751897530" watchObservedRunningTime="2026-01-27 13:12:47.935879571 +0000 UTC m=+3660.740790259" Jan 27 13:12:47 crc kubenswrapper[4745]: I0127 13:12:47.938338 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k4dbb" podStartSLOduration=2.701789334 podStartE2EDuration="5.938330001s" podCreationTimestamp="2026-01-27 13:12:42 +0000 UTC" firstStartedPulling="2026-01-27 13:12:43.873619802 +0000 UTC m=+3656.678530490" lastFinishedPulling="2026-01-27 13:12:47.110160469 +0000 UTC m=+3659.915071157" observedRunningTime="2026-01-27 13:12:47.933162883 +0000 UTC m=+3660.738073591" watchObservedRunningTime="2026-01-27 13:12:47.938330001 +0000 UTC m=+3660.743240689" Jan 27 13:12:48 crc kubenswrapper[4745]: I0127 13:12:48.484106 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nvhmq" Jan 27 13:12:48 crc kubenswrapper[4745]: I0127 13:12:48.484160 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nvhmq" Jan 27 13:12:49 crc kubenswrapper[4745]: I0127 13:12:49.527651 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nvhmq" podUID="6201ed46-afc3-4a95-9f79-c3d66161254a" containerName="registry-server" probeResult="failure" output=< Jan 27 13:12:49 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 27 13:12:49 crc kubenswrapper[4745]: > Jan 27 13:12:52 crc kubenswrapper[4745]: I0127 13:12:52.878088 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k4dbb" Jan 27 13:12:52 crc kubenswrapper[4745]: I0127 13:12:52.878456 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k4dbb" Jan 27 13:12:52 crc kubenswrapper[4745]: I0127 13:12:52.939772 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k4dbb" Jan 27 13:12:52 crc kubenswrapper[4745]: I0127 13:12:52.993761 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k4dbb" Jan 27 13:12:53 crc kubenswrapper[4745]: I0127 13:12:53.175305 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k4dbb"] Jan 27 13:12:54 crc kubenswrapper[4745]: I0127 13:12:54.966916 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k4dbb" podUID="faa1aaa7-bc00-4be9-8b36-6834b6a24b79" containerName="registry-server" containerID="cri-o://4f00183566a17ae8da6b7f84b223fad78bd336af8474edbabebe28b02b6bf8b6" gracePeriod=2 Jan 27 13:12:55 crc kubenswrapper[4745]: 
I0127 13:12:55.868331 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k4dbb" Jan 27 13:12:56 crc kubenswrapper[4745]: I0127 13:12:56.009618 4745 generic.go:334] "Generic (PLEG): container finished" podID="faa1aaa7-bc00-4be9-8b36-6834b6a24b79" containerID="4f00183566a17ae8da6b7f84b223fad78bd336af8474edbabebe28b02b6bf8b6" exitCode=0 Jan 27 13:12:56 crc kubenswrapper[4745]: I0127 13:12:56.009671 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4dbb" event={"ID":"faa1aaa7-bc00-4be9-8b36-6834b6a24b79","Type":"ContainerDied","Data":"4f00183566a17ae8da6b7f84b223fad78bd336af8474edbabebe28b02b6bf8b6"} Jan 27 13:12:56 crc kubenswrapper[4745]: I0127 13:12:56.009697 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k4dbb" Jan 27 13:12:56 crc kubenswrapper[4745]: I0127 13:12:56.009720 4745 scope.go:117] "RemoveContainer" containerID="4f00183566a17ae8da6b7f84b223fad78bd336af8474edbabebe28b02b6bf8b6" Jan 27 13:12:56 crc kubenswrapper[4745]: I0127 13:12:56.009707 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4dbb" event={"ID":"faa1aaa7-bc00-4be9-8b36-6834b6a24b79","Type":"ContainerDied","Data":"5366d895e4cb800f39de9d115e7667a0c58c24a9a5a5105327d8f63f9d9fd47b"} Jan 27 13:12:56 crc kubenswrapper[4745]: I0127 13:12:56.026922 4745 scope.go:117] "RemoveContainer" containerID="af08bc95405c3b9721a9334eeb50cfb5d9fdfb8ad8877754ffa52075d4ce4f55" Jan 27 13:12:56 crc kubenswrapper[4745]: I0127 13:12:56.043024 4745 scope.go:117] "RemoveContainer" containerID="fb1fbe6176dd00f367ee5b21dc005d760cdfee466eb9967d327f870f900ac5dc" Jan 27 13:12:56 crc kubenswrapper[4745]: I0127 13:12:56.059139 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faa1aaa7-bc00-4be9-8b36-6834b6a24b79-catalog-content\") pod \"faa1aaa7-bc00-4be9-8b36-6834b6a24b79\" (UID: \"faa1aaa7-bc00-4be9-8b36-6834b6a24b79\") " Jan 27 13:12:56 crc kubenswrapper[4745]: I0127 13:12:56.059242 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faa1aaa7-bc00-4be9-8b36-6834b6a24b79-utilities\") pod \"faa1aaa7-bc00-4be9-8b36-6834b6a24b79\" (UID: \"faa1aaa7-bc00-4be9-8b36-6834b6a24b79\") " Jan 27 13:12:56 crc kubenswrapper[4745]: I0127 13:12:56.059298 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9w92\" (UniqueName: \"kubernetes.io/projected/faa1aaa7-bc00-4be9-8b36-6834b6a24b79-kube-api-access-d9w92\") pod \"faa1aaa7-bc00-4be9-8b36-6834b6a24b79\" (UID: \"faa1aaa7-bc00-4be9-8b36-6834b6a24b79\") " Jan 27 13:12:56 crc kubenswrapper[4745]: I0127 13:12:56.060372 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faa1aaa7-bc00-4be9-8b36-6834b6a24b79-utilities" (OuterVolumeSpecName: "utilities") pod "faa1aaa7-bc00-4be9-8b36-6834b6a24b79" (UID: "faa1aaa7-bc00-4be9-8b36-6834b6a24b79"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:12:56 crc kubenswrapper[4745]: I0127 13:12:56.066017 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faa1aaa7-bc00-4be9-8b36-6834b6a24b79-kube-api-access-d9w92" (OuterVolumeSpecName: "kube-api-access-d9w92") pod "faa1aaa7-bc00-4be9-8b36-6834b6a24b79" (UID: "faa1aaa7-bc00-4be9-8b36-6834b6a24b79"). InnerVolumeSpecName "kube-api-access-d9w92". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:12:56 crc kubenswrapper[4745]: I0127 13:12:56.072825 4745 scope.go:117] "RemoveContainer" containerID="4f00183566a17ae8da6b7f84b223fad78bd336af8474edbabebe28b02b6bf8b6" Jan 27 13:12:56 crc kubenswrapper[4745]: E0127 13:12:56.074040 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f00183566a17ae8da6b7f84b223fad78bd336af8474edbabebe28b02b6bf8b6\": container with ID starting with 4f00183566a17ae8da6b7f84b223fad78bd336af8474edbabebe28b02b6bf8b6 not found: ID does not exist" containerID="4f00183566a17ae8da6b7f84b223fad78bd336af8474edbabebe28b02b6bf8b6" Jan 27 13:12:56 crc kubenswrapper[4745]: I0127 13:12:56.074084 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f00183566a17ae8da6b7f84b223fad78bd336af8474edbabebe28b02b6bf8b6"} err="failed to get container status \"4f00183566a17ae8da6b7f84b223fad78bd336af8474edbabebe28b02b6bf8b6\": rpc error: code = NotFound desc = could not find container \"4f00183566a17ae8da6b7f84b223fad78bd336af8474edbabebe28b02b6bf8b6\": container with ID starting with 4f00183566a17ae8da6b7f84b223fad78bd336af8474edbabebe28b02b6bf8b6 not found: ID does not exist" Jan 27 13:12:56 crc kubenswrapper[4745]: I0127 13:12:56.074111 4745 scope.go:117] "RemoveContainer" containerID="af08bc95405c3b9721a9334eeb50cfb5d9fdfb8ad8877754ffa52075d4ce4f55" Jan 27 13:12:56 crc kubenswrapper[4745]: E0127 13:12:56.074427 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af08bc95405c3b9721a9334eeb50cfb5d9fdfb8ad8877754ffa52075d4ce4f55\": container with ID starting with af08bc95405c3b9721a9334eeb50cfb5d9fdfb8ad8877754ffa52075d4ce4f55 not found: ID does not exist" containerID="af08bc95405c3b9721a9334eeb50cfb5d9fdfb8ad8877754ffa52075d4ce4f55" Jan 27 13:12:56 crc kubenswrapper[4745]: I0127 13:12:56.074457 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af08bc95405c3b9721a9334eeb50cfb5d9fdfb8ad8877754ffa52075d4ce4f55"} err="failed to get container status \"af08bc95405c3b9721a9334eeb50cfb5d9fdfb8ad8877754ffa52075d4ce4f55\": rpc error: code = NotFound desc = could not find container \"af08bc95405c3b9721a9334eeb50cfb5d9fdfb8ad8877754ffa52075d4ce4f55\": container with ID starting with af08bc95405c3b9721a9334eeb50cfb5d9fdfb8ad8877754ffa52075d4ce4f55 not found: ID does not exist" Jan 27 13:12:56 crc kubenswrapper[4745]: I0127 13:12:56.074475 4745 scope.go:117] "RemoveContainer" containerID="fb1fbe6176dd00f367ee5b21dc005d760cdfee466eb9967d327f870f900ac5dc" Jan 27 13:12:56 crc kubenswrapper[4745]: E0127 13:12:56.074714 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb1fbe6176dd00f367ee5b21dc005d760cdfee466eb9967d327f870f900ac5dc\": container with ID starting with fb1fbe6176dd00f367ee5b21dc005d760cdfee466eb9967d327f870f900ac5dc not found: ID does not 
exist" containerID="fb1fbe6176dd00f367ee5b21dc005d760cdfee466eb9967d327f870f900ac5dc" Jan 27 13:12:56 crc kubenswrapper[4745]: I0127 13:12:56.074732 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb1fbe6176dd00f367ee5b21dc005d760cdfee466eb9967d327f870f900ac5dc"} err="failed to get container status \"fb1fbe6176dd00f367ee5b21dc005d760cdfee466eb9967d327f870f900ac5dc\": rpc error: code = NotFound desc = could not find container \"fb1fbe6176dd00f367ee5b21dc005d760cdfee466eb9967d327f870f900ac5dc\": container with ID starting with fb1fbe6176dd00f367ee5b21dc005d760cdfee466eb9967d327f870f900ac5dc not found: ID does not exist" Jan 27 13:12:56 crc kubenswrapper[4745]: I0127 13:12:56.109697 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faa1aaa7-bc00-4be9-8b36-6834b6a24b79-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "faa1aaa7-bc00-4be9-8b36-6834b6a24b79" (UID: "faa1aaa7-bc00-4be9-8b36-6834b6a24b79"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:12:56 crc kubenswrapper[4745]: I0127 13:12:56.161450 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faa1aaa7-bc00-4be9-8b36-6834b6a24b79-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:12:56 crc kubenswrapper[4745]: I0127 13:12:56.161518 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faa1aaa7-bc00-4be9-8b36-6834b6a24b79-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:12:56 crc kubenswrapper[4745]: I0127 13:12:56.161550 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9w92\" (UniqueName: \"kubernetes.io/projected/faa1aaa7-bc00-4be9-8b36-6834b6a24b79-kube-api-access-d9w92\") on node \"crc\" DevicePath \"\"" Jan 27 13:12:56 crc kubenswrapper[4745]: I0127 13:12:56.346211 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k4dbb"] Jan 27 13:12:56 crc kubenswrapper[4745]: I0127 13:12:56.354032 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k4dbb"] Jan 27 13:12:58 crc kubenswrapper[4745]: I0127 13:12:58.087228 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faa1aaa7-bc00-4be9-8b36-6834b6a24b79" path="/var/lib/kubelet/pods/faa1aaa7-bc00-4be9-8b36-6834b6a24b79/volumes" Jan 27 13:12:58 crc kubenswrapper[4745]: I0127 13:12:58.522673 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nvhmq" Jan 27 13:12:58 crc kubenswrapper[4745]: I0127 13:12:58.569430 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nvhmq" Jan 27 13:12:59 crc kubenswrapper[4745]: I0127 13:12:59.571934 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nvhmq"] Jan 27 13:13:00 crc kubenswrapper[4745]: I0127 13:13:00.052555 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nvhmq" podUID="6201ed46-afc3-4a95-9f79-c3d66161254a" containerName="registry-server" containerID="cri-o://9097e56a48c840100c1a2769731eb24260004fc648e29f510e100b6bd28301ef" gracePeriod=2 Jan 27 13:13:01 crc kubenswrapper[4745]: I0127 13:13:01.055446 4745 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nvhmq" Jan 27 13:13:01 crc kubenswrapper[4745]: I0127 13:13:01.062364 4745 generic.go:334] "Generic (PLEG): container finished" podID="6201ed46-afc3-4a95-9f79-c3d66161254a" containerID="9097e56a48c840100c1a2769731eb24260004fc648e29f510e100b6bd28301ef" exitCode=0 Jan 27 13:13:01 crc kubenswrapper[4745]: I0127 13:13:01.062420 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvhmq" event={"ID":"6201ed46-afc3-4a95-9f79-c3d66161254a","Type":"ContainerDied","Data":"9097e56a48c840100c1a2769731eb24260004fc648e29f510e100b6bd28301ef"} Jan 27 13:13:01 crc kubenswrapper[4745]: I0127 13:13:01.062469 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvhmq" event={"ID":"6201ed46-afc3-4a95-9f79-c3d66161254a","Type":"ContainerDied","Data":"57fd989268de5e8677dbc34f37d5a8e2799c85bd4a8945975aa92ea7b094abaa"} Jan 27 13:13:01 crc kubenswrapper[4745]: I0127 13:13:01.062499 4745 scope.go:117] "RemoveContainer" containerID="9097e56a48c840100c1a2769731eb24260004fc648e29f510e100b6bd28301ef" Jan 27 13:13:01 crc kubenswrapper[4745]: I0127 13:13:01.062425 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nvhmq" Jan 27 13:13:01 crc kubenswrapper[4745]: I0127 13:13:01.086707 4745 scope.go:117] "RemoveContainer" containerID="9487684eeac9bfd0d4ab13495f76876ec9f15cdd5e6431c1ef823ecd4b2e1ce5" Jan 27 13:13:01 crc kubenswrapper[4745]: I0127 13:13:01.117571 4745 scope.go:117] "RemoveContainer" containerID="befce33591d3e6dadd90752e9088f24ae63a39dd78039d61a687d9d8f82e04b3" Jan 27 13:13:01 crc kubenswrapper[4745]: I0127 13:13:01.130766 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6201ed46-afc3-4a95-9f79-c3d66161254a-utilities\") pod \"6201ed46-afc3-4a95-9f79-c3d66161254a\" (UID: \"6201ed46-afc3-4a95-9f79-c3d66161254a\") " Jan 27 13:13:01 crc kubenswrapper[4745]: I0127 13:13:01.130836 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb7dp\" (UniqueName: \"kubernetes.io/projected/6201ed46-afc3-4a95-9f79-c3d66161254a-kube-api-access-cb7dp\") pod \"6201ed46-afc3-4a95-9f79-c3d66161254a\" (UID: \"6201ed46-afc3-4a95-9f79-c3d66161254a\") " Jan 27 13:13:01 crc kubenswrapper[4745]: I0127 13:13:01.130905 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6201ed46-afc3-4a95-9f79-c3d66161254a-catalog-content\") pod \"6201ed46-afc3-4a95-9f79-c3d66161254a\" (UID: \"6201ed46-afc3-4a95-9f79-c3d66161254a\") " Jan 27 13:13:01 crc kubenswrapper[4745]: I0127 13:13:01.131738 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6201ed46-afc3-4a95-9f79-c3d66161254a-utilities" (OuterVolumeSpecName: "utilities") pod "6201ed46-afc3-4a95-9f79-c3d66161254a" (UID: "6201ed46-afc3-4a95-9f79-c3d66161254a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:13:01 crc kubenswrapper[4745]: I0127 13:13:01.133481 4745 scope.go:117] "RemoveContainer" containerID="9097e56a48c840100c1a2769731eb24260004fc648e29f510e100b6bd28301ef" Jan 27 13:13:01 crc kubenswrapper[4745]: E0127 13:13:01.133917 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9097e56a48c840100c1a2769731eb24260004fc648e29f510e100b6bd28301ef\": container with ID starting with 9097e56a48c840100c1a2769731eb24260004fc648e29f510e100b6bd28301ef not found: ID does not exist" containerID="9097e56a48c840100c1a2769731eb24260004fc648e29f510e100b6bd28301ef" Jan 27 13:13:01 crc kubenswrapper[4745]: I0127 13:13:01.133943 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9097e56a48c840100c1a2769731eb24260004fc648e29f510e100b6bd28301ef"} err="failed to get container status \"9097e56a48c840100c1a2769731eb24260004fc648e29f510e100b6bd28301ef\": rpc error: code = NotFound desc = could not find container \"9097e56a48c840100c1a2769731eb24260004fc648e29f510e100b6bd28301ef\": container with ID starting with 9097e56a48c840100c1a2769731eb24260004fc648e29f510e100b6bd28301ef not found: ID does not exist" Jan 27 13:13:01 crc kubenswrapper[4745]: I0127 13:13:01.133963 4745 scope.go:117] "RemoveContainer" containerID="9487684eeac9bfd0d4ab13495f76876ec9f15cdd5e6431c1ef823ecd4b2e1ce5" Jan 27 13:13:01 crc kubenswrapper[4745]: E0127 13:13:01.134295 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9487684eeac9bfd0d4ab13495f76876ec9f15cdd5e6431c1ef823ecd4b2e1ce5\": container with ID starting with 9487684eeac9bfd0d4ab13495f76876ec9f15cdd5e6431c1ef823ecd4b2e1ce5 not found: ID does not exist" containerID="9487684eeac9bfd0d4ab13495f76876ec9f15cdd5e6431c1ef823ecd4b2e1ce5" Jan 27 13:13:01 crc kubenswrapper[4745]: I0127 13:13:01.134323 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9487684eeac9bfd0d4ab13495f76876ec9f15cdd5e6431c1ef823ecd4b2e1ce5"} err="failed to get container status \"9487684eeac9bfd0d4ab13495f76876ec9f15cdd5e6431c1ef823ecd4b2e1ce5\": rpc error: code = NotFound desc = could not find container \"9487684eeac9bfd0d4ab13495f76876ec9f15cdd5e6431c1ef823ecd4b2e1ce5\": container with ID starting with 9487684eeac9bfd0d4ab13495f76876ec9f15cdd5e6431c1ef823ecd4b2e1ce5 not found: ID does not exist" Jan 27 13:13:01 crc kubenswrapper[4745]: I0127 13:13:01.134336 4745 scope.go:117] "RemoveContainer" containerID="befce33591d3e6dadd90752e9088f24ae63a39dd78039d61a687d9d8f82e04b3" Jan 27 13:13:01 crc kubenswrapper[4745]: E0127 13:13:01.134685 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"befce33591d3e6dadd90752e9088f24ae63a39dd78039d61a687d9d8f82e04b3\": container with ID starting with befce33591d3e6dadd90752e9088f24ae63a39dd78039d61a687d9d8f82e04b3 not found: ID does not exist" containerID="befce33591d3e6dadd90752e9088f24ae63a39dd78039d61a687d9d8f82e04b3" Jan 27 13:13:01 crc kubenswrapper[4745]: I0127 13:13:01.134710 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"befce33591d3e6dadd90752e9088f24ae63a39dd78039d61a687d9d8f82e04b3"} err="failed to get container status \"befce33591d3e6dadd90752e9088f24ae63a39dd78039d61a687d9d8f82e04b3\": rpc error: code = NotFound desc = could not 
find container \"befce33591d3e6dadd90752e9088f24ae63a39dd78039d61a687d9d8f82e04b3\": container with ID starting with befce33591d3e6dadd90752e9088f24ae63a39dd78039d61a687d9d8f82e04b3 not found: ID does not exist" Jan 27 13:13:01 crc kubenswrapper[4745]: I0127 13:13:01.153566 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6201ed46-afc3-4a95-9f79-c3d66161254a-kube-api-access-cb7dp" (OuterVolumeSpecName: "kube-api-access-cb7dp") pod "6201ed46-afc3-4a95-9f79-c3d66161254a" (UID: "6201ed46-afc3-4a95-9f79-c3d66161254a"). InnerVolumeSpecName "kube-api-access-cb7dp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:13:01 crc kubenswrapper[4745]: I0127 13:13:01.232195 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6201ed46-afc3-4a95-9f79-c3d66161254a-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:13:01 crc kubenswrapper[4745]: I0127 13:13:01.232224 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb7dp\" (UniqueName: \"kubernetes.io/projected/6201ed46-afc3-4a95-9f79-c3d66161254a-kube-api-access-cb7dp\") on node \"crc\" DevicePath \"\"" Jan 27 13:13:01 crc kubenswrapper[4745]: I0127 13:13:01.245114 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6201ed46-afc3-4a95-9f79-c3d66161254a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6201ed46-afc3-4a95-9f79-c3d66161254a" (UID: "6201ed46-afc3-4a95-9f79-c3d66161254a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:13:01 crc kubenswrapper[4745]: I0127 13:13:01.333486 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6201ed46-afc3-4a95-9f79-c3d66161254a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:13:01 crc kubenswrapper[4745]: I0127 13:13:01.402306 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nvhmq"] Jan 27 13:13:01 crc kubenswrapper[4745]: I0127 13:13:01.406822 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nvhmq"] Jan 27 13:13:02 crc kubenswrapper[4745]: I0127 13:13:02.084261 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6201ed46-afc3-4a95-9f79-c3d66161254a" path="/var/lib/kubelet/pods/6201ed46-afc3-4a95-9f79-c3d66161254a/volumes" Jan 27 13:13:05 crc kubenswrapper[4745]: I0127 13:13:05.968288 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:13:05 crc kubenswrapper[4745]: I0127 13:13:05.968968 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:13:05 crc kubenswrapper[4745]: I0127 13:13:05.969047 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" Jan 27 13:13:05 crc kubenswrapper[4745]: I0127 13:13:05.970119 4745 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d938299b6752fac083ad77a1edce2a879b62760f1a87f7148a931dd6e44908ee"} pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 13:13:05 crc kubenswrapper[4745]: I0127 13:13:05.970380 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" containerID="cri-o://d938299b6752fac083ad77a1edce2a879b62760f1a87f7148a931dd6e44908ee" gracePeriod=600 Jan 27 13:13:06 crc kubenswrapper[4745]: I0127 13:13:06.120884 4745 generic.go:334] "Generic (PLEG): container finished" podID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerID="d938299b6752fac083ad77a1edce2a879b62760f1a87f7148a931dd6e44908ee" exitCode=0 Jan 27 13:13:06 crc kubenswrapper[4745]: I0127 13:13:06.120934 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerDied","Data":"d938299b6752fac083ad77a1edce2a879b62760f1a87f7148a931dd6e44908ee"} Jan 27 13:13:06 crc kubenswrapper[4745]: I0127 13:13:06.120974 4745 scope.go:117] "RemoveContainer" containerID="45adf36cbd79f63970e2f67c8ed23a8c95876cdbd4fff4a6f0557c9b2ad0e048" Jan 27 13:13:07 crc kubenswrapper[4745]: I0127 13:13:07.133091 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerStarted","Data":"cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e"} Jan 27 13:15:00 crc kubenswrapper[4745]: I0127 13:15:00.168261 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491995-jfgrs"] Jan 27 13:15:00 crc kubenswrapper[4745]: E0127 13:15:00.169361 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6201ed46-afc3-4a95-9f79-c3d66161254a" containerName="extract-content" Jan 27 13:15:00 crc kubenswrapper[4745]: I0127 13:15:00.169378 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="6201ed46-afc3-4a95-9f79-c3d66161254a" containerName="extract-content" Jan 27 13:15:00 crc kubenswrapper[4745]: E0127 13:15:00.169421 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa1aaa7-bc00-4be9-8b36-6834b6a24b79" containerName="registry-server" Jan 27 13:15:00 crc kubenswrapper[4745]: I0127 13:15:00.169431 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa1aaa7-bc00-4be9-8b36-6834b6a24b79" containerName="registry-server" Jan 27 13:15:00 crc kubenswrapper[4745]: E0127 13:15:00.169447 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa1aaa7-bc00-4be9-8b36-6834b6a24b79" containerName="extract-utilities" Jan 27 13:15:00 crc kubenswrapper[4745]: I0127 13:15:00.169455 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa1aaa7-bc00-4be9-8b36-6834b6a24b79" containerName="extract-utilities" Jan 27 13:15:00 crc kubenswrapper[4745]: E0127 13:15:00.169490 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6201ed46-afc3-4a95-9f79-c3d66161254a" containerName="extract-utilities" Jan 27 13:15:00 crc kubenswrapper[4745]: I0127 13:15:00.169497 4745 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="6201ed46-afc3-4a95-9f79-c3d66161254a" containerName="extract-utilities" Jan 27 13:15:00 crc kubenswrapper[4745]: E0127 13:15:00.169511 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa1aaa7-bc00-4be9-8b36-6834b6a24b79" containerName="extract-content" Jan 27 13:15:00 crc kubenswrapper[4745]: I0127 13:15:00.169519 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa1aaa7-bc00-4be9-8b36-6834b6a24b79" containerName="extract-content" Jan 27 13:15:00 crc kubenswrapper[4745]: E0127 13:15:00.169529 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6201ed46-afc3-4a95-9f79-c3d66161254a" containerName="registry-server" Jan 27 13:15:00 crc kubenswrapper[4745]: I0127 13:15:00.169536 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="6201ed46-afc3-4a95-9f79-c3d66161254a" containerName="registry-server" Jan 27 13:15:00 crc kubenswrapper[4745]: I0127 13:15:00.169724 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="6201ed46-afc3-4a95-9f79-c3d66161254a" containerName="registry-server" Jan 27 13:15:00 crc kubenswrapper[4745]: I0127 13:15:00.169795 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa1aaa7-bc00-4be9-8b36-6834b6a24b79" containerName="registry-server" Jan 27 13:15:00 crc kubenswrapper[4745]: I0127 13:15:00.170303 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-jfgrs" Jan 27 13:15:00 crc kubenswrapper[4745]: I0127 13:15:00.173213 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 13:15:00 crc kubenswrapper[4745]: I0127 13:15:00.173967 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 13:15:00 crc kubenswrapper[4745]: I0127 13:15:00.188641 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491995-jfgrs"] Jan 27 13:15:00 crc kubenswrapper[4745]: I0127 13:15:00.223524 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0588c2f4-eccc-4c3f-b4bb-4294764652b7-config-volume\") pod \"collect-profiles-29491995-jfgrs\" (UID: \"0588c2f4-eccc-4c3f-b4bb-4294764652b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-jfgrs" Jan 27 13:15:00 crc kubenswrapper[4745]: I0127 13:15:00.223828 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0588c2f4-eccc-4c3f-b4bb-4294764652b7-secret-volume\") pod \"collect-profiles-29491995-jfgrs\" (UID: \"0588c2f4-eccc-4c3f-b4bb-4294764652b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-jfgrs" Jan 27 13:15:00 crc kubenswrapper[4745]: I0127 13:15:00.223880 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x272b\" (UniqueName: \"kubernetes.io/projected/0588c2f4-eccc-4c3f-b4bb-4294764652b7-kube-api-access-x272b\") pod \"collect-profiles-29491995-jfgrs\" (UID: \"0588c2f4-eccc-4c3f-b4bb-4294764652b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-jfgrs" Jan 27 13:15:00 crc kubenswrapper[4745]: I0127 13:15:00.325243 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0588c2f4-eccc-4c3f-b4bb-4294764652b7-config-volume\") pod \"collect-profiles-29491995-jfgrs\" (UID: \"0588c2f4-eccc-4c3f-b4bb-4294764652b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-jfgrs" Jan 27 13:15:00 crc kubenswrapper[4745]: I0127 13:15:00.325361 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0588c2f4-eccc-4c3f-b4bb-4294764652b7-secret-volume\") pod \"collect-profiles-29491995-jfgrs\" (UID: \"0588c2f4-eccc-4c3f-b4bb-4294764652b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-jfgrs" Jan 27 13:15:00 crc kubenswrapper[4745]: I0127 13:15:00.325389 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x272b\" (UniqueName: \"kubernetes.io/projected/0588c2f4-eccc-4c3f-b4bb-4294764652b7-kube-api-access-x272b\") pod \"collect-profiles-29491995-jfgrs\" (UID: \"0588c2f4-eccc-4c3f-b4bb-4294764652b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-jfgrs" Jan 27 13:15:00 crc kubenswrapper[4745]: I0127 13:15:00.326257 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0588c2f4-eccc-4c3f-b4bb-4294764652b7-config-volume\") pod \"collect-profiles-29491995-jfgrs\" (UID: \"0588c2f4-eccc-4c3f-b4bb-4294764652b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-jfgrs" Jan 27 13:15:00 crc kubenswrapper[4745]: I0127 13:15:00.333063 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0588c2f4-eccc-4c3f-b4bb-4294764652b7-secret-volume\") pod \"collect-profiles-29491995-jfgrs\" (UID: \"0588c2f4-eccc-4c3f-b4bb-4294764652b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-jfgrs" Jan 27 13:15:00 crc kubenswrapper[4745]: I0127 13:15:00.342680 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x272b\" (UniqueName: \"kubernetes.io/projected/0588c2f4-eccc-4c3f-b4bb-4294764652b7-kube-api-access-x272b\") pod \"collect-profiles-29491995-jfgrs\" (UID: \"0588c2f4-eccc-4c3f-b4bb-4294764652b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-jfgrs" Jan 27 13:15:00 crc kubenswrapper[4745]: I0127 13:15:00.490095 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-jfgrs" Jan 27 13:15:00 crc kubenswrapper[4745]: I0127 13:15:00.972513 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491995-jfgrs"] Jan 27 13:15:00 crc kubenswrapper[4745]: W0127 13:15:00.982355 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0588c2f4_eccc_4c3f_b4bb_4294764652b7.slice/crio-86cff11e1babe8f19a25cd56fbc9eb4cb77cd1a2ca2d0bf4f9490e0a775ac0ed WatchSource:0}: Error finding container 86cff11e1babe8f19a25cd56fbc9eb4cb77cd1a2ca2d0bf4f9490e0a775ac0ed: Status 404 returned error can't find the container with id 86cff11e1babe8f19a25cd56fbc9eb4cb77cd1a2ca2d0bf4f9490e0a775ac0ed Jan 27 13:15:01 crc kubenswrapper[4745]: I0127 13:15:01.023733 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-jfgrs" event={"ID":"0588c2f4-eccc-4c3f-b4bb-4294764652b7","Type":"ContainerStarted","Data":"86cff11e1babe8f19a25cd56fbc9eb4cb77cd1a2ca2d0bf4f9490e0a775ac0ed"} Jan 27 13:15:02 crc kubenswrapper[4745]: I0127 13:15:02.033886 4745 generic.go:334] "Generic (PLEG): container finished" podID="0588c2f4-eccc-4c3f-b4bb-4294764652b7" containerID="aef7190abdf7665a83ddddb586de5ab9da9db97fdb49f2b886c8b10968eb1beb" exitCode=0 Jan 27 13:15:02 crc kubenswrapper[4745]: I0127 13:15:02.034127 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-jfgrs" event={"ID":"0588c2f4-eccc-4c3f-b4bb-4294764652b7","Type":"ContainerDied","Data":"aef7190abdf7665a83ddddb586de5ab9da9db97fdb49f2b886c8b10968eb1beb"} Jan 27 13:15:03 crc kubenswrapper[4745]: I0127 13:15:03.438636 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-jfgrs" Jan 27 13:15:03 crc kubenswrapper[4745]: I0127 13:15:03.573565 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x272b\" (UniqueName: \"kubernetes.io/projected/0588c2f4-eccc-4c3f-b4bb-4294764652b7-kube-api-access-x272b\") pod \"0588c2f4-eccc-4c3f-b4bb-4294764652b7\" (UID: \"0588c2f4-eccc-4c3f-b4bb-4294764652b7\") " Jan 27 13:15:03 crc kubenswrapper[4745]: I0127 13:15:03.573804 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0588c2f4-eccc-4c3f-b4bb-4294764652b7-config-volume\") pod \"0588c2f4-eccc-4c3f-b4bb-4294764652b7\" (UID: \"0588c2f4-eccc-4c3f-b4bb-4294764652b7\") " Jan 27 13:15:03 crc kubenswrapper[4745]: I0127 13:15:03.573981 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0588c2f4-eccc-4c3f-b4bb-4294764652b7-secret-volume\") pod \"0588c2f4-eccc-4c3f-b4bb-4294764652b7\" (UID: \"0588c2f4-eccc-4c3f-b4bb-4294764652b7\") " Jan 27 13:15:03 crc kubenswrapper[4745]: I0127 13:15:03.574651 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0588c2f4-eccc-4c3f-b4bb-4294764652b7-config-volume" (OuterVolumeSpecName: "config-volume") pod "0588c2f4-eccc-4c3f-b4bb-4294764652b7" (UID: "0588c2f4-eccc-4c3f-b4bb-4294764652b7"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:15:03 crc kubenswrapper[4745]: I0127 13:15:03.581639 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0588c2f4-eccc-4c3f-b4bb-4294764652b7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0588c2f4-eccc-4c3f-b4bb-4294764652b7" (UID: "0588c2f4-eccc-4c3f-b4bb-4294764652b7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:15:03 crc kubenswrapper[4745]: I0127 13:15:03.582251 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0588c2f4-eccc-4c3f-b4bb-4294764652b7-kube-api-access-x272b" (OuterVolumeSpecName: "kube-api-access-x272b") pod "0588c2f4-eccc-4c3f-b4bb-4294764652b7" (UID: "0588c2f4-eccc-4c3f-b4bb-4294764652b7"). InnerVolumeSpecName "kube-api-access-x272b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:15:03 crc kubenswrapper[4745]: I0127 13:15:03.675772 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x272b\" (UniqueName: \"kubernetes.io/projected/0588c2f4-eccc-4c3f-b4bb-4294764652b7-kube-api-access-x272b\") on node \"crc\" DevicePath \"\"" Jan 27 13:15:03 crc kubenswrapper[4745]: I0127 13:15:03.675900 4745 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0588c2f4-eccc-4c3f-b4bb-4294764652b7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 13:15:03 crc kubenswrapper[4745]: I0127 13:15:03.675922 4745 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0588c2f4-eccc-4c3f-b4bb-4294764652b7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 13:15:04 crc kubenswrapper[4745]: I0127 13:15:04.047801 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-jfgrs" event={"ID":"0588c2f4-eccc-4c3f-b4bb-4294764652b7","Type":"ContainerDied","Data":"86cff11e1babe8f19a25cd56fbc9eb4cb77cd1a2ca2d0bf4f9490e0a775ac0ed"} Jan 27 13:15:04 crc kubenswrapper[4745]: I0127 13:15:04.048182 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86cff11e1babe8f19a25cd56fbc9eb4cb77cd1a2ca2d0bf4f9490e0a775ac0ed" Jan 27 13:15:04 crc kubenswrapper[4745]: I0127 13:15:04.047874 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-jfgrs" Jan 27 13:15:04 crc kubenswrapper[4745]: I0127 13:15:04.507648 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491950-2k8q9"] Jan 27 13:15:04 crc kubenswrapper[4745]: I0127 13:15:04.512381 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491950-2k8q9"] Jan 27 13:15:06 crc kubenswrapper[4745]: I0127 13:15:06.085456 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="362ff88a-01a5-46fe-8c12-15247d5b2028" path="/var/lib/kubelet/pods/362ff88a-01a5-46fe-8c12-15247d5b2028/volumes" Jan 27 13:15:35 crc kubenswrapper[4745]: I0127 13:15:35.967469 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:15:35 crc kubenswrapper[4745]: I0127 13:15:35.967988 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:15:57 crc kubenswrapper[4745]: I0127 13:15:57.017042 4745 scope.go:117] "RemoveContainer" containerID="d43360d1d9c77a0c77ff613bcb9789449819420594f5ea8c58431d7b9ab0fa12" Jan 27 13:16:05 crc kubenswrapper[4745]: I0127 13:16:05.967843 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:16:05 crc kubenswrapper[4745]: I0127 13:16:05.968444 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:16:35 crc kubenswrapper[4745]: I0127 13:16:35.967953 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:16:35 crc kubenswrapper[4745]: I0127 13:16:35.969012 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:16:35 crc kubenswrapper[4745]: I0127 13:16:35.969091 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" Jan 27 13:16:35 crc kubenswrapper[4745]: I0127 13:16:35.970164 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e"} pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 13:16:35 crc kubenswrapper[4745]: I0127 13:16:35.970252 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" containerID="cri-o://cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e" gracePeriod=600 Jan 27 13:16:36 crc kubenswrapper[4745]: E0127 13:16:36.091543 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:16:36 crc kubenswrapper[4745]: I0127 13:16:36.759334 4745 generic.go:334] "Generic (PLEG): container finished" podID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerID="cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e" exitCode=0 Jan 27 13:16:36 crc kubenswrapper[4745]: I0127 13:16:36.759393 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerDied","Data":"cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e"} Jan 27 13:16:36 crc kubenswrapper[4745]: I0127 13:16:36.759438 4745 scope.go:117] "RemoveContainer" containerID="d938299b6752fac083ad77a1edce2a879b62760f1a87f7148a931dd6e44908ee" Jan 27 13:16:36 crc kubenswrapper[4745]: I0127 13:16:36.760099 4745 scope.go:117] "RemoveContainer" containerID="cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e" Jan 27 13:16:36 crc kubenswrapper[4745]: E0127 13:16:36.760678 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:16:52 crc kubenswrapper[4745]: I0127 13:16:52.074348 4745 scope.go:117] "RemoveContainer" containerID="cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e" Jan 27 13:16:52 crc kubenswrapper[4745]: E0127 13:16:52.075184 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:17:05 crc kubenswrapper[4745]: I0127 13:17:05.073908 4745 scope.go:117] "RemoveContainer" containerID="cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e" Jan 27 13:17:05 crc kubenswrapper[4745]: E0127 13:17:05.076281 4745 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:17:19 crc kubenswrapper[4745]: I0127 13:17:19.073305 4745 scope.go:117] "RemoveContainer" containerID="cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e" Jan 27 13:17:19 crc kubenswrapper[4745]: E0127 13:17:19.074011 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:17:33 crc kubenswrapper[4745]: I0127 13:17:33.073727 4745 scope.go:117] "RemoveContainer" containerID="cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e" Jan 27 13:17:33 crc kubenswrapper[4745]: E0127 13:17:33.075507 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:17:45 crc kubenswrapper[4745]: I0127 13:17:45.074245 4745 scope.go:117] "RemoveContainer" containerID="cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e" Jan 27 13:17:45 crc kubenswrapper[4745]: E0127 13:17:45.074755 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:17:58 crc kubenswrapper[4745]: I0127 13:17:58.075653 4745 scope.go:117] "RemoveContainer" containerID="cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e" Jan 27 13:17:58 crc kubenswrapper[4745]: E0127 13:17:58.076509 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:18:10 crc kubenswrapper[4745]: I0127 13:18:10.073849 4745 scope.go:117] "RemoveContainer" containerID="cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e" Jan 27 13:18:10 crc kubenswrapper[4745]: E0127 13:18:10.074445 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:18:25 crc kubenswrapper[4745]: I0127 13:18:25.073523 4745 scope.go:117] "RemoveContainer" containerID="cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e" Jan 27 13:18:25 crc kubenswrapper[4745]: E0127 13:18:25.074296 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:18:40 crc kubenswrapper[4745]: I0127 13:18:40.074288 4745 scope.go:117] "RemoveContainer" containerID="cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e" Jan 27 13:18:40 crc kubenswrapper[4745]: E0127 13:18:40.075017 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:18:53 crc kubenswrapper[4745]: I0127 13:18:53.074650 4745 scope.go:117] "RemoveContainer" containerID="cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e" Jan 27 13:18:53 crc kubenswrapper[4745]: E0127 13:18:53.075479 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:19:04 crc kubenswrapper[4745]: I0127 13:19:04.074429 4745 scope.go:117] "RemoveContainer" containerID="cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e" Jan 27 13:19:04 crc kubenswrapper[4745]: E0127 13:19:04.075143 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:19:17 crc kubenswrapper[4745]: I0127 13:19:17.074419 4745 scope.go:117] "RemoveContainer" containerID="cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e" Jan 27 13:19:17 crc kubenswrapper[4745]: E0127 13:19:17.075289 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" 
podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:19:30 crc kubenswrapper[4745]: I0127 13:19:30.074741 4745 scope.go:117] "RemoveContainer" containerID="cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e" Jan 27 13:19:30 crc kubenswrapper[4745]: E0127 13:19:30.075464 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:19:45 crc kubenswrapper[4745]: I0127 13:19:45.073670 4745 scope.go:117] "RemoveContainer" containerID="cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e" Jan 27 13:19:45 crc kubenswrapper[4745]: E0127 13:19:45.074398 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:20:00 crc kubenswrapper[4745]: I0127 13:20:00.074422 4745 scope.go:117] "RemoveContainer" containerID="cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e" Jan 27 13:20:00 crc kubenswrapper[4745]: E0127 13:20:00.075453 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:20:12 crc kubenswrapper[4745]: I0127 13:20:12.074019 4745 scope.go:117] "RemoveContainer" containerID="cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e" Jan 27 13:20:12 crc kubenswrapper[4745]: E0127 13:20:12.076951 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:20:25 crc kubenswrapper[4745]: I0127 13:20:25.074009 4745 scope.go:117] "RemoveContainer" containerID="cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e" Jan 27 13:20:25 crc kubenswrapper[4745]: E0127 13:20:25.074694 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:20:37 crc kubenswrapper[4745]: I0127 13:20:37.074773 4745 scope.go:117] "RemoveContainer" 
containerID="cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e" Jan 27 13:20:37 crc kubenswrapper[4745]: E0127 13:20:37.075519 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:20:49 crc kubenswrapper[4745]: I0127 13:20:49.074272 4745 scope.go:117] "RemoveContainer" containerID="cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e" Jan 27 13:20:49 crc kubenswrapper[4745]: E0127 13:20:49.074953 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:21:04 crc kubenswrapper[4745]: I0127 13:21:04.074280 4745 scope.go:117] "RemoveContainer" containerID="cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e" Jan 27 13:21:04 crc kubenswrapper[4745]: E0127 13:21:04.075123 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:21:16 crc kubenswrapper[4745]: I0127 13:21:16.074461 4745 scope.go:117] "RemoveContainer" containerID="cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e" Jan 27 13:21:16 crc kubenswrapper[4745]: E0127 13:21:16.075163 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:21:28 crc kubenswrapper[4745]: I0127 13:21:28.078112 4745 scope.go:117] "RemoveContainer" containerID="cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e" Jan 27 13:21:28 crc kubenswrapper[4745]: E0127 13:21:28.079210 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:21:40 crc kubenswrapper[4745]: I0127 13:21:40.073595 4745 scope.go:117] "RemoveContainer" containerID="cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e" Jan 27 13:21:41 crc kubenswrapper[4745]: I0127 13:21:41.019992 4745 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerStarted","Data":"325ea61030e1d83d18ee20378ba80ae8f24adeecdff6874e78ec7f630de1a9a0"} Jan 27 13:21:47 crc kubenswrapper[4745]: I0127 13:21:47.330095 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vbtn9"] Jan 27 13:21:47 crc kubenswrapper[4745]: E0127 13:21:47.330902 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0588c2f4-eccc-4c3f-b4bb-4294764652b7" containerName="collect-profiles" Jan 27 13:21:47 crc kubenswrapper[4745]: I0127 13:21:47.330914 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="0588c2f4-eccc-4c3f-b4bb-4294764652b7" containerName="collect-profiles" Jan 27 13:21:47 crc kubenswrapper[4745]: I0127 13:21:47.331069 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="0588c2f4-eccc-4c3f-b4bb-4294764652b7" containerName="collect-profiles" Jan 27 13:21:47 crc kubenswrapper[4745]: I0127 13:21:47.332077 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbtn9" Jan 27 13:21:47 crc kubenswrapper[4745]: I0127 13:21:47.366639 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vbtn9"] Jan 27 13:21:47 crc kubenswrapper[4745]: I0127 13:21:47.531616 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c584973-c160-4fba-9e68-8f8f18cb1f88-utilities\") pod \"community-operators-vbtn9\" (UID: \"7c584973-c160-4fba-9e68-8f8f18cb1f88\") " pod="openshift-marketplace/community-operators-vbtn9" Jan 27 13:21:47 crc kubenswrapper[4745]: I0127 13:21:47.532256 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c584973-c160-4fba-9e68-8f8f18cb1f88-catalog-content\") pod \"community-operators-vbtn9\" (UID: \"7c584973-c160-4fba-9e68-8f8f18cb1f88\") " pod="openshift-marketplace/community-operators-vbtn9" Jan 27 13:21:47 crc kubenswrapper[4745]: I0127 13:21:47.532413 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwkth\" (UniqueName: \"kubernetes.io/projected/7c584973-c160-4fba-9e68-8f8f18cb1f88-kube-api-access-cwkth\") pod \"community-operators-vbtn9\" (UID: \"7c584973-c160-4fba-9e68-8f8f18cb1f88\") " pod="openshift-marketplace/community-operators-vbtn9" Jan 27 13:21:47 crc kubenswrapper[4745]: I0127 13:21:47.633413 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c584973-c160-4fba-9e68-8f8f18cb1f88-utilities\") pod \"community-operators-vbtn9\" (UID: \"7c584973-c160-4fba-9e68-8f8f18cb1f88\") " pod="openshift-marketplace/community-operators-vbtn9" Jan 27 13:21:47 crc kubenswrapper[4745]: I0127 13:21:47.633473 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c584973-c160-4fba-9e68-8f8f18cb1f88-catalog-content\") pod \"community-operators-vbtn9\" (UID: \"7c584973-c160-4fba-9e68-8f8f18cb1f88\") " pod="openshift-marketplace/community-operators-vbtn9" Jan 27 13:21:47 crc kubenswrapper[4745]: I0127 13:21:47.633563 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cwkth\" (UniqueName: \"kubernetes.io/projected/7c584973-c160-4fba-9e68-8f8f18cb1f88-kube-api-access-cwkth\") pod \"community-operators-vbtn9\" (UID: \"7c584973-c160-4fba-9e68-8f8f18cb1f88\") " pod="openshift-marketplace/community-operators-vbtn9" Jan 27 13:21:47 crc kubenswrapper[4745]: I0127 13:21:47.634652 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c584973-c160-4fba-9e68-8f8f18cb1f88-utilities\") pod \"community-operators-vbtn9\" (UID: \"7c584973-c160-4fba-9e68-8f8f18cb1f88\") " pod="openshift-marketplace/community-operators-vbtn9" Jan 27 13:21:47 crc kubenswrapper[4745]: I0127 13:21:47.690706 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c584973-c160-4fba-9e68-8f8f18cb1f88-catalog-content\") pod \"community-operators-vbtn9\" (UID: \"7c584973-c160-4fba-9e68-8f8f18cb1f88\") " pod="openshift-marketplace/community-operators-vbtn9" Jan 27 13:21:47 crc kubenswrapper[4745]: I0127 13:21:47.716267 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwkth\" (UniqueName: \"kubernetes.io/projected/7c584973-c160-4fba-9e68-8f8f18cb1f88-kube-api-access-cwkth\") pod \"community-operators-vbtn9\" (UID: \"7c584973-c160-4fba-9e68-8f8f18cb1f88\") " pod="openshift-marketplace/community-operators-vbtn9" Jan 27 13:21:47 crc kubenswrapper[4745]: I0127 13:21:47.961257 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbtn9" Jan 27 13:21:48 crc kubenswrapper[4745]: I0127 13:21:48.389789 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vbtn9"] Jan 27 13:21:49 crc kubenswrapper[4745]: I0127 13:21:49.095287 4745 generic.go:334] "Generic (PLEG): container finished" podID="7c584973-c160-4fba-9e68-8f8f18cb1f88" containerID="81976c07a022ed520f78cbc4538348995090344ce08e55c015642a7bb706e9d9" exitCode=0 Jan 27 13:21:49 crc kubenswrapper[4745]: I0127 13:21:49.095340 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbtn9" event={"ID":"7c584973-c160-4fba-9e68-8f8f18cb1f88","Type":"ContainerDied","Data":"81976c07a022ed520f78cbc4538348995090344ce08e55c015642a7bb706e9d9"} Jan 27 13:21:49 crc kubenswrapper[4745]: I0127 13:21:49.095648 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbtn9" event={"ID":"7c584973-c160-4fba-9e68-8f8f18cb1f88","Type":"ContainerStarted","Data":"e7b10c92b5efae7d8c69d7ebca36e9b9d82781ec481929c7cda2a49728ed5c4e"} Jan 27 13:21:49 crc kubenswrapper[4745]: I0127 13:21:49.097167 4745 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 13:21:50 crc kubenswrapper[4745]: I0127 13:21:50.121500 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbtn9" event={"ID":"7c584973-c160-4fba-9e68-8f8f18cb1f88","Type":"ContainerStarted","Data":"b8a2588a66bac33444e79c3cf405e7b5bc7a386616eebbc7e6fd77db74e69928"} Jan 27 13:21:51 crc kubenswrapper[4745]: I0127 13:21:51.130471 4745 generic.go:334] "Generic (PLEG): container finished" podID="7c584973-c160-4fba-9e68-8f8f18cb1f88" containerID="b8a2588a66bac33444e79c3cf405e7b5bc7a386616eebbc7e6fd77db74e69928" exitCode=0 Jan 27 13:21:51 crc kubenswrapper[4745]: I0127 13:21:51.130542 4745 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-vbtn9" event={"ID":"7c584973-c160-4fba-9e68-8f8f18cb1f88","Type":"ContainerDied","Data":"b8a2588a66bac33444e79c3cf405e7b5bc7a386616eebbc7e6fd77db74e69928"} Jan 27 13:21:52 crc kubenswrapper[4745]: I0127 13:21:52.139547 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbtn9" event={"ID":"7c584973-c160-4fba-9e68-8f8f18cb1f88","Type":"ContainerStarted","Data":"d5a039e7010aaf7eb4feb79e075cf4bfa08e26a5ff1f31dcfa6b4bf90dddb0ce"} Jan 27 13:21:52 crc kubenswrapper[4745]: I0127 13:21:52.162960 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vbtn9" podStartSLOduration=2.737660928 podStartE2EDuration="5.162939778s" podCreationTimestamp="2026-01-27 13:21:47 +0000 UTC" firstStartedPulling="2026-01-27 13:21:49.096758871 +0000 UTC m=+4201.901669559" lastFinishedPulling="2026-01-27 13:21:51.522037721 +0000 UTC m=+4204.326948409" observedRunningTime="2026-01-27 13:21:52.154785025 +0000 UTC m=+4204.959695713" watchObservedRunningTime="2026-01-27 13:21:52.162939778 +0000 UTC m=+4204.967850476" Jan 27 13:21:57 crc kubenswrapper[4745]: I0127 13:21:57.961953 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vbtn9" Jan 27 13:21:57 crc kubenswrapper[4745]: I0127 13:21:57.962533 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vbtn9" Jan 27 13:21:58 crc kubenswrapper[4745]: I0127 13:21:58.009333 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vbtn9" Jan 27 13:21:58 crc kubenswrapper[4745]: I0127 13:21:58.219361 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vbtn9" Jan 27 13:21:58 crc kubenswrapper[4745]: I0127 13:21:58.276379 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vbtn9"] Jan 27 13:22:00 crc kubenswrapper[4745]: I0127 13:22:00.198845 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vbtn9" podUID="7c584973-c160-4fba-9e68-8f8f18cb1f88" containerName="registry-server" containerID="cri-o://d5a039e7010aaf7eb4feb79e075cf4bfa08e26a5ff1f31dcfa6b4bf90dddb0ce" gracePeriod=2 Jan 27 13:22:01 crc kubenswrapper[4745]: I0127 13:22:01.975662 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vbtn9" Jan 27 13:22:02 crc kubenswrapper[4745]: I0127 13:22:02.055746 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c584973-c160-4fba-9e68-8f8f18cb1f88-catalog-content\") pod \"7c584973-c160-4fba-9e68-8f8f18cb1f88\" (UID: \"7c584973-c160-4fba-9e68-8f8f18cb1f88\") " Jan 27 13:22:02 crc kubenswrapper[4745]: I0127 13:22:02.056078 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwkth\" (UniqueName: \"kubernetes.io/projected/7c584973-c160-4fba-9e68-8f8f18cb1f88-kube-api-access-cwkth\") pod \"7c584973-c160-4fba-9e68-8f8f18cb1f88\" (UID: \"7c584973-c160-4fba-9e68-8f8f18cb1f88\") " Jan 27 13:22:02 crc kubenswrapper[4745]: I0127 13:22:02.056108 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c584973-c160-4fba-9e68-8f8f18cb1f88-utilities\") pod \"7c584973-c160-4fba-9e68-8f8f18cb1f88\" (UID: \"7c584973-c160-4fba-9e68-8f8f18cb1f88\") " Jan 27 13:22:02 crc kubenswrapper[4745]: I0127 13:22:02.056976 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c584973-c160-4fba-9e68-8f8f18cb1f88-utilities" (OuterVolumeSpecName: "utilities") pod "7c584973-c160-4fba-9e68-8f8f18cb1f88" (UID: "7c584973-c160-4fba-9e68-8f8f18cb1f88"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:22:02 crc kubenswrapper[4745]: I0127 13:22:02.063088 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c584973-c160-4fba-9e68-8f8f18cb1f88-kube-api-access-cwkth" (OuterVolumeSpecName: "kube-api-access-cwkth") pod "7c584973-c160-4fba-9e68-8f8f18cb1f88" (UID: "7c584973-c160-4fba-9e68-8f8f18cb1f88"). InnerVolumeSpecName "kube-api-access-cwkth". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:22:02 crc kubenswrapper[4745]: I0127 13:22:02.119065 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c584973-c160-4fba-9e68-8f8f18cb1f88-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c584973-c160-4fba-9e68-8f8f18cb1f88" (UID: "7c584973-c160-4fba-9e68-8f8f18cb1f88"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:22:02 crc kubenswrapper[4745]: I0127 13:22:02.164677 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c584973-c160-4fba-9e68-8f8f18cb1f88-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:22:02 crc kubenswrapper[4745]: I0127 13:22:02.165064 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c584973-c160-4fba-9e68-8f8f18cb1f88-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:22:02 crc kubenswrapper[4745]: I0127 13:22:02.165219 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwkth\" (UniqueName: \"kubernetes.io/projected/7c584973-c160-4fba-9e68-8f8f18cb1f88-kube-api-access-cwkth\") on node \"crc\" DevicePath \"\"" Jan 27 13:22:02 crc kubenswrapper[4745]: I0127 13:22:02.216711 4745 generic.go:334] "Generic (PLEG): container finished" podID="7c584973-c160-4fba-9e68-8f8f18cb1f88" containerID="d5a039e7010aaf7eb4feb79e075cf4bfa08e26a5ff1f31dcfa6b4bf90dddb0ce" exitCode=0 Jan 27 13:22:02 crc kubenswrapper[4745]: I0127 13:22:02.216770 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbtn9" event={"ID":"7c584973-c160-4fba-9e68-8f8f18cb1f88","Type":"ContainerDied","Data":"d5a039e7010aaf7eb4feb79e075cf4bfa08e26a5ff1f31dcfa6b4bf90dddb0ce"} Jan 27 13:22:02 crc kubenswrapper[4745]: I0127 13:22:02.216782 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbtn9" Jan 27 13:22:02 crc kubenswrapper[4745]: I0127 13:22:02.216837 4745 scope.go:117] "RemoveContainer" containerID="d5a039e7010aaf7eb4feb79e075cf4bfa08e26a5ff1f31dcfa6b4bf90dddb0ce" Jan 27 13:22:02 crc kubenswrapper[4745]: I0127 13:22:02.216803 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbtn9" event={"ID":"7c584973-c160-4fba-9e68-8f8f18cb1f88","Type":"ContainerDied","Data":"e7b10c92b5efae7d8c69d7ebca36e9b9d82781ec481929c7cda2a49728ed5c4e"} Jan 27 13:22:02 crc kubenswrapper[4745]: I0127 13:22:02.241339 4745 scope.go:117] "RemoveContainer" containerID="b8a2588a66bac33444e79c3cf405e7b5bc7a386616eebbc7e6fd77db74e69928" Jan 27 13:22:02 crc kubenswrapper[4745]: I0127 13:22:02.299876 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vbtn9"] Jan 27 13:22:02 crc kubenswrapper[4745]: I0127 13:22:02.299982 4745 scope.go:117] "RemoveContainer" containerID="81976c07a022ed520f78cbc4538348995090344ce08e55c015642a7bb706e9d9" Jan 27 13:22:02 crc kubenswrapper[4745]: I0127 13:22:02.319108 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vbtn9"] Jan 27 13:22:02 crc kubenswrapper[4745]: I0127 13:22:02.360823 4745 scope.go:117] "RemoveContainer" containerID="d5a039e7010aaf7eb4feb79e075cf4bfa08e26a5ff1f31dcfa6b4bf90dddb0ce" Jan 27 13:22:02 crc kubenswrapper[4745]: E0127 13:22:02.361408 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5a039e7010aaf7eb4feb79e075cf4bfa08e26a5ff1f31dcfa6b4bf90dddb0ce\": container with ID starting with d5a039e7010aaf7eb4feb79e075cf4bfa08e26a5ff1f31dcfa6b4bf90dddb0ce not found: ID does not exist" containerID="d5a039e7010aaf7eb4feb79e075cf4bfa08e26a5ff1f31dcfa6b4bf90dddb0ce" Jan 27 13:22:02 crc kubenswrapper[4745]: I0127 13:22:02.361445 
4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5a039e7010aaf7eb4feb79e075cf4bfa08e26a5ff1f31dcfa6b4bf90dddb0ce"} err="failed to get container status \"d5a039e7010aaf7eb4feb79e075cf4bfa08e26a5ff1f31dcfa6b4bf90dddb0ce\": rpc error: code = NotFound desc = could not find container \"d5a039e7010aaf7eb4feb79e075cf4bfa08e26a5ff1f31dcfa6b4bf90dddb0ce\": container with ID starting with d5a039e7010aaf7eb4feb79e075cf4bfa08e26a5ff1f31dcfa6b4bf90dddb0ce not found: ID does not exist" Jan 27 13:22:02 crc kubenswrapper[4745]: I0127 13:22:02.361489 4745 scope.go:117] "RemoveContainer" containerID="b8a2588a66bac33444e79c3cf405e7b5bc7a386616eebbc7e6fd77db74e69928" Jan 27 13:22:02 crc kubenswrapper[4745]: E0127 13:22:02.361984 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8a2588a66bac33444e79c3cf405e7b5bc7a386616eebbc7e6fd77db74e69928\": container with ID starting with b8a2588a66bac33444e79c3cf405e7b5bc7a386616eebbc7e6fd77db74e69928 not found: ID does not exist" containerID="b8a2588a66bac33444e79c3cf405e7b5bc7a386616eebbc7e6fd77db74e69928" Jan 27 13:22:02 crc kubenswrapper[4745]: I0127 13:22:02.362014 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8a2588a66bac33444e79c3cf405e7b5bc7a386616eebbc7e6fd77db74e69928"} err="failed to get container status \"b8a2588a66bac33444e79c3cf405e7b5bc7a386616eebbc7e6fd77db74e69928\": rpc error: code = NotFound desc = could not find container \"b8a2588a66bac33444e79c3cf405e7b5bc7a386616eebbc7e6fd77db74e69928\": container with ID starting with b8a2588a66bac33444e79c3cf405e7b5bc7a386616eebbc7e6fd77db74e69928 not found: ID does not exist" Jan 27 13:22:02 crc kubenswrapper[4745]: I0127 13:22:02.362035 4745 scope.go:117] "RemoveContainer" containerID="81976c07a022ed520f78cbc4538348995090344ce08e55c015642a7bb706e9d9" Jan 27 13:22:02 crc kubenswrapper[4745]: E0127 13:22:02.362911 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81976c07a022ed520f78cbc4538348995090344ce08e55c015642a7bb706e9d9\": container with ID starting with 81976c07a022ed520f78cbc4538348995090344ce08e55c015642a7bb706e9d9 not found: ID does not exist" containerID="81976c07a022ed520f78cbc4538348995090344ce08e55c015642a7bb706e9d9" Jan 27 13:22:02 crc kubenswrapper[4745]: I0127 13:22:02.362937 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81976c07a022ed520f78cbc4538348995090344ce08e55c015642a7bb706e9d9"} err="failed to get container status \"81976c07a022ed520f78cbc4538348995090344ce08e55c015642a7bb706e9d9\": rpc error: code = NotFound desc = could not find container \"81976c07a022ed520f78cbc4538348995090344ce08e55c015642a7bb706e9d9\": container with ID starting with 81976c07a022ed520f78cbc4538348995090344ce08e55c015642a7bb706e9d9 not found: ID does not exist" Jan 27 13:22:04 crc kubenswrapper[4745]: I0127 13:22:04.082473 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c584973-c160-4fba-9e68-8f8f18cb1f88" path="/var/lib/kubelet/pods/7c584973-c160-4fba-9e68-8f8f18cb1f88/volumes" Jan 27 13:23:02 crc kubenswrapper[4745]: I0127 13:23:02.742331 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qgz4q"] Jan 27 13:23:02 crc kubenswrapper[4745]: E0127 13:23:02.743210 4745 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7c584973-c160-4fba-9e68-8f8f18cb1f88" containerName="registry-server" Jan 27 13:23:02 crc kubenswrapper[4745]: I0127 13:23:02.743225 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c584973-c160-4fba-9e68-8f8f18cb1f88" containerName="registry-server" Jan 27 13:23:02 crc kubenswrapper[4745]: E0127 13:23:02.743251 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c584973-c160-4fba-9e68-8f8f18cb1f88" containerName="extract-utilities" Jan 27 13:23:02 crc kubenswrapper[4745]: I0127 13:23:02.743259 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c584973-c160-4fba-9e68-8f8f18cb1f88" containerName="extract-utilities" Jan 27 13:23:02 crc kubenswrapper[4745]: E0127 13:23:02.743279 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c584973-c160-4fba-9e68-8f8f18cb1f88" containerName="extract-content" Jan 27 13:23:02 crc kubenswrapper[4745]: I0127 13:23:02.743287 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c584973-c160-4fba-9e68-8f8f18cb1f88" containerName="extract-content" Jan 27 13:23:02 crc kubenswrapper[4745]: I0127 13:23:02.743464 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c584973-c160-4fba-9e68-8f8f18cb1f88" containerName="registry-server" Jan 27 13:23:02 crc kubenswrapper[4745]: I0127 13:23:02.744791 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qgz4q" Jan 27 13:23:02 crc kubenswrapper[4745]: I0127 13:23:02.767627 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qgz4q"] Jan 27 13:23:02 crc kubenswrapper[4745]: I0127 13:23:02.794799 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3db031e9-bf69-4226-8ac1-cc5029be27d2-catalog-content\") pod \"redhat-marketplace-qgz4q\" (UID: \"3db031e9-bf69-4226-8ac1-cc5029be27d2\") " pod="openshift-marketplace/redhat-marketplace-qgz4q" Jan 27 13:23:02 crc kubenswrapper[4745]: I0127 13:23:02.794933 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr4z5\" (UniqueName: \"kubernetes.io/projected/3db031e9-bf69-4226-8ac1-cc5029be27d2-kube-api-access-dr4z5\") pod \"redhat-marketplace-qgz4q\" (UID: \"3db031e9-bf69-4226-8ac1-cc5029be27d2\") " pod="openshift-marketplace/redhat-marketplace-qgz4q" Jan 27 13:23:02 crc kubenswrapper[4745]: I0127 13:23:02.794972 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3db031e9-bf69-4226-8ac1-cc5029be27d2-utilities\") pod \"redhat-marketplace-qgz4q\" (UID: \"3db031e9-bf69-4226-8ac1-cc5029be27d2\") " pod="openshift-marketplace/redhat-marketplace-qgz4q" Jan 27 13:23:02 crc kubenswrapper[4745]: I0127 13:23:02.896444 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3db031e9-bf69-4226-8ac1-cc5029be27d2-catalog-content\") pod \"redhat-marketplace-qgz4q\" (UID: \"3db031e9-bf69-4226-8ac1-cc5029be27d2\") " pod="openshift-marketplace/redhat-marketplace-qgz4q" Jan 27 13:23:02 crc kubenswrapper[4745]: I0127 13:23:02.896507 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr4z5\" (UniqueName: \"kubernetes.io/projected/3db031e9-bf69-4226-8ac1-cc5029be27d2-kube-api-access-dr4z5\") pod 
\"redhat-marketplace-qgz4q\" (UID: \"3db031e9-bf69-4226-8ac1-cc5029be27d2\") " pod="openshift-marketplace/redhat-marketplace-qgz4q" Jan 27 13:23:02 crc kubenswrapper[4745]: I0127 13:23:02.896527 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3db031e9-bf69-4226-8ac1-cc5029be27d2-utilities\") pod \"redhat-marketplace-qgz4q\" (UID: \"3db031e9-bf69-4226-8ac1-cc5029be27d2\") " pod="openshift-marketplace/redhat-marketplace-qgz4q" Jan 27 13:23:02 crc kubenswrapper[4745]: I0127 13:23:02.896922 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3db031e9-bf69-4226-8ac1-cc5029be27d2-catalog-content\") pod \"redhat-marketplace-qgz4q\" (UID: \"3db031e9-bf69-4226-8ac1-cc5029be27d2\") " pod="openshift-marketplace/redhat-marketplace-qgz4q" Jan 27 13:23:02 crc kubenswrapper[4745]: I0127 13:23:02.896934 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3db031e9-bf69-4226-8ac1-cc5029be27d2-utilities\") pod \"redhat-marketplace-qgz4q\" (UID: \"3db031e9-bf69-4226-8ac1-cc5029be27d2\") " pod="openshift-marketplace/redhat-marketplace-qgz4q" Jan 27 13:23:02 crc kubenswrapper[4745]: I0127 13:23:02.927739 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr4z5\" (UniqueName: \"kubernetes.io/projected/3db031e9-bf69-4226-8ac1-cc5029be27d2-kube-api-access-dr4z5\") pod \"redhat-marketplace-qgz4q\" (UID: \"3db031e9-bf69-4226-8ac1-cc5029be27d2\") " pod="openshift-marketplace/redhat-marketplace-qgz4q" Jan 27 13:23:03 crc kubenswrapper[4745]: I0127 13:23:03.065130 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qgz4q" Jan 27 13:23:03 crc kubenswrapper[4745]: I0127 13:23:03.331290 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qb9kp"] Jan 27 13:23:03 crc kubenswrapper[4745]: I0127 13:23:03.333333 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qb9kp" Jan 27 13:23:03 crc kubenswrapper[4745]: I0127 13:23:03.340484 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qb9kp"] Jan 27 13:23:03 crc kubenswrapper[4745]: I0127 13:23:03.404162 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd2510f8-8db6-46f9-802b-36b0ec0a84e4-catalog-content\") pod \"certified-operators-qb9kp\" (UID: \"fd2510f8-8db6-46f9-802b-36b0ec0a84e4\") " pod="openshift-marketplace/certified-operators-qb9kp" Jan 27 13:23:03 crc kubenswrapper[4745]: I0127 13:23:03.404267 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mv8j\" (UniqueName: \"kubernetes.io/projected/fd2510f8-8db6-46f9-802b-36b0ec0a84e4-kube-api-access-7mv8j\") pod \"certified-operators-qb9kp\" (UID: \"fd2510f8-8db6-46f9-802b-36b0ec0a84e4\") " pod="openshift-marketplace/certified-operators-qb9kp" Jan 27 13:23:03 crc kubenswrapper[4745]: I0127 13:23:03.404323 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd2510f8-8db6-46f9-802b-36b0ec0a84e4-utilities\") pod \"certified-operators-qb9kp\" (UID: \"fd2510f8-8db6-46f9-802b-36b0ec0a84e4\") " pod="openshift-marketplace/certified-operators-qb9kp" Jan 27 13:23:03 crc kubenswrapper[4745]: I0127 13:23:03.487161 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qgz4q"] Jan 27 13:23:03 crc kubenswrapper[4745]: I0127 13:23:03.511638 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd2510f8-8db6-46f9-802b-36b0ec0a84e4-utilities\") pod \"certified-operators-qb9kp\" (UID: \"fd2510f8-8db6-46f9-802b-36b0ec0a84e4\") " pod="openshift-marketplace/certified-operators-qb9kp" Jan 27 13:23:03 crc kubenswrapper[4745]: I0127 13:23:03.511733 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd2510f8-8db6-46f9-802b-36b0ec0a84e4-catalog-content\") pod \"certified-operators-qb9kp\" (UID: \"fd2510f8-8db6-46f9-802b-36b0ec0a84e4\") " pod="openshift-marketplace/certified-operators-qb9kp" Jan 27 13:23:03 crc kubenswrapper[4745]: I0127 13:23:03.511780 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mv8j\" (UniqueName: \"kubernetes.io/projected/fd2510f8-8db6-46f9-802b-36b0ec0a84e4-kube-api-access-7mv8j\") pod \"certified-operators-qb9kp\" (UID: \"fd2510f8-8db6-46f9-802b-36b0ec0a84e4\") " pod="openshift-marketplace/certified-operators-qb9kp" Jan 27 13:23:03 crc kubenswrapper[4745]: I0127 13:23:03.512171 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd2510f8-8db6-46f9-802b-36b0ec0a84e4-utilities\") pod \"certified-operators-qb9kp\" (UID: \"fd2510f8-8db6-46f9-802b-36b0ec0a84e4\") " pod="openshift-marketplace/certified-operators-qb9kp" Jan 27 13:23:03 crc kubenswrapper[4745]: I0127 13:23:03.512226 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd2510f8-8db6-46f9-802b-36b0ec0a84e4-catalog-content\") pod \"certified-operators-qb9kp\" (UID: 
\"fd2510f8-8db6-46f9-802b-36b0ec0a84e4\") " pod="openshift-marketplace/certified-operators-qb9kp" Jan 27 13:23:03 crc kubenswrapper[4745]: I0127 13:23:03.531229 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mv8j\" (UniqueName: \"kubernetes.io/projected/fd2510f8-8db6-46f9-802b-36b0ec0a84e4-kube-api-access-7mv8j\") pod \"certified-operators-qb9kp\" (UID: \"fd2510f8-8db6-46f9-802b-36b0ec0a84e4\") " pod="openshift-marketplace/certified-operators-qb9kp" Jan 27 13:23:03 crc kubenswrapper[4745]: I0127 13:23:03.654332 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qb9kp" Jan 27 13:23:03 crc kubenswrapper[4745]: I0127 13:23:03.705794 4745 generic.go:334] "Generic (PLEG): container finished" podID="3db031e9-bf69-4226-8ac1-cc5029be27d2" containerID="5ecd23ed7e80e70a42836f28694c876cd7120920f143b7685fb28e03390ea264" exitCode=0 Jan 27 13:23:03 crc kubenswrapper[4745]: I0127 13:23:03.705857 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgz4q" event={"ID":"3db031e9-bf69-4226-8ac1-cc5029be27d2","Type":"ContainerDied","Data":"5ecd23ed7e80e70a42836f28694c876cd7120920f143b7685fb28e03390ea264"} Jan 27 13:23:03 crc kubenswrapper[4745]: I0127 13:23:03.705881 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgz4q" event={"ID":"3db031e9-bf69-4226-8ac1-cc5029be27d2","Type":"ContainerStarted","Data":"ec58407fb9a1458ebffa6530f1ee01f56f0d784ac0a26b2b58e1ea709d419ef6"} Jan 27 13:23:04 crc kubenswrapper[4745]: I0127 13:23:04.169400 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qb9kp"] Jan 27 13:23:04 crc kubenswrapper[4745]: I0127 13:23:04.716109 4745 generic.go:334] "Generic (PLEG): container finished" podID="fd2510f8-8db6-46f9-802b-36b0ec0a84e4" containerID="481bc3f40bc53b11fed02d38cdf396475895fbe4c5b6da43180e5e87ce2904f2" exitCode=0 Jan 27 13:23:04 crc kubenswrapper[4745]: I0127 13:23:04.716181 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qb9kp" event={"ID":"fd2510f8-8db6-46f9-802b-36b0ec0a84e4","Type":"ContainerDied","Data":"481bc3f40bc53b11fed02d38cdf396475895fbe4c5b6da43180e5e87ce2904f2"} Jan 27 13:23:04 crc kubenswrapper[4745]: I0127 13:23:04.716263 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qb9kp" event={"ID":"fd2510f8-8db6-46f9-802b-36b0ec0a84e4","Type":"ContainerStarted","Data":"ee7ef72b998284ffa574d07add11440bc179fb97bb5bb8100dd0b5af2a340298"} Jan 27 13:23:04 crc kubenswrapper[4745]: I0127 13:23:04.718130 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgz4q" event={"ID":"3db031e9-bf69-4226-8ac1-cc5029be27d2","Type":"ContainerStarted","Data":"cef381d98bd0c59a3816ddb0e14dad1a2cb2c63eba4a320ab809b2f9fc353dd2"} Jan 27 13:23:05 crc kubenswrapper[4745]: I0127 13:23:05.725187 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-smgb6"] Jan 27 13:23:05 crc kubenswrapper[4745]: I0127 13:23:05.727159 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-smgb6" Jan 27 13:23:05 crc kubenswrapper[4745]: I0127 13:23:05.729694 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qb9kp" event={"ID":"fd2510f8-8db6-46f9-802b-36b0ec0a84e4","Type":"ContainerStarted","Data":"9e1b0db0f2f977f7bf564a3f65025fa4bda04b8ce7f2289ac21b9ca7b4f0c53a"} Jan 27 13:23:05 crc kubenswrapper[4745]: I0127 13:23:05.733459 4745 generic.go:334] "Generic (PLEG): container finished" podID="3db031e9-bf69-4226-8ac1-cc5029be27d2" containerID="cef381d98bd0c59a3816ddb0e14dad1a2cb2c63eba4a320ab809b2f9fc353dd2" exitCode=0 Jan 27 13:23:05 crc kubenswrapper[4745]: I0127 13:23:05.733512 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgz4q" event={"ID":"3db031e9-bf69-4226-8ac1-cc5029be27d2","Type":"ContainerDied","Data":"cef381d98bd0c59a3816ddb0e14dad1a2cb2c63eba4a320ab809b2f9fc353dd2"} Jan 27 13:23:05 crc kubenswrapper[4745]: I0127 13:23:05.742322 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-smgb6"] Jan 27 13:23:05 crc kubenswrapper[4745]: I0127 13:23:05.855726 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fffd2408-d3ee-474c-81db-61e26059b497-catalog-content\") pod \"redhat-operators-smgb6\" (UID: \"fffd2408-d3ee-474c-81db-61e26059b497\") " pod="openshift-marketplace/redhat-operators-smgb6" Jan 27 13:23:05 crc kubenswrapper[4745]: I0127 13:23:05.856036 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7wvc\" (UniqueName: \"kubernetes.io/projected/fffd2408-d3ee-474c-81db-61e26059b497-kube-api-access-l7wvc\") pod \"redhat-operators-smgb6\" (UID: \"fffd2408-d3ee-474c-81db-61e26059b497\") " pod="openshift-marketplace/redhat-operators-smgb6" Jan 27 13:23:05 crc kubenswrapper[4745]: I0127 13:23:05.856212 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fffd2408-d3ee-474c-81db-61e26059b497-utilities\") pod \"redhat-operators-smgb6\" (UID: \"fffd2408-d3ee-474c-81db-61e26059b497\") " pod="openshift-marketplace/redhat-operators-smgb6" Jan 27 13:23:05 crc kubenswrapper[4745]: I0127 13:23:05.957630 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fffd2408-d3ee-474c-81db-61e26059b497-utilities\") pod \"redhat-operators-smgb6\" (UID: \"fffd2408-d3ee-474c-81db-61e26059b497\") " pod="openshift-marketplace/redhat-operators-smgb6" Jan 27 13:23:05 crc kubenswrapper[4745]: I0127 13:23:05.957708 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fffd2408-d3ee-474c-81db-61e26059b497-catalog-content\") pod \"redhat-operators-smgb6\" (UID: \"fffd2408-d3ee-474c-81db-61e26059b497\") " pod="openshift-marketplace/redhat-operators-smgb6" Jan 27 13:23:05 crc kubenswrapper[4745]: I0127 13:23:05.957769 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7wvc\" (UniqueName: \"kubernetes.io/projected/fffd2408-d3ee-474c-81db-61e26059b497-kube-api-access-l7wvc\") pod \"redhat-operators-smgb6\" (UID: \"fffd2408-d3ee-474c-81db-61e26059b497\") " pod="openshift-marketplace/redhat-operators-smgb6" Jan 
27 13:23:05 crc kubenswrapper[4745]: I0127 13:23:05.958551 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fffd2408-d3ee-474c-81db-61e26059b497-utilities\") pod \"redhat-operators-smgb6\" (UID: \"fffd2408-d3ee-474c-81db-61e26059b497\") " pod="openshift-marketplace/redhat-operators-smgb6" Jan 27 13:23:05 crc kubenswrapper[4745]: I0127 13:23:05.958843 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fffd2408-d3ee-474c-81db-61e26059b497-catalog-content\") pod \"redhat-operators-smgb6\" (UID: \"fffd2408-d3ee-474c-81db-61e26059b497\") " pod="openshift-marketplace/redhat-operators-smgb6" Jan 27 13:23:05 crc kubenswrapper[4745]: I0127 13:23:05.976701 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7wvc\" (UniqueName: \"kubernetes.io/projected/fffd2408-d3ee-474c-81db-61e26059b497-kube-api-access-l7wvc\") pod \"redhat-operators-smgb6\" (UID: \"fffd2408-d3ee-474c-81db-61e26059b497\") " pod="openshift-marketplace/redhat-operators-smgb6" Jan 27 13:23:06 crc kubenswrapper[4745]: I0127 13:23:06.051639 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-smgb6" Jan 27 13:23:06 crc kubenswrapper[4745]: I0127 13:23:06.475438 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-smgb6"] Jan 27 13:23:06 crc kubenswrapper[4745]: I0127 13:23:06.741374 4745 generic.go:334] "Generic (PLEG): container finished" podID="fffd2408-d3ee-474c-81db-61e26059b497" containerID="913628c0cd0f127ef30780ebc0336fef837b4eed17a7344069d48e19a2ca1492" exitCode=0 Jan 27 13:23:06 crc kubenswrapper[4745]: I0127 13:23:06.741459 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-smgb6" event={"ID":"fffd2408-d3ee-474c-81db-61e26059b497","Type":"ContainerDied","Data":"913628c0cd0f127ef30780ebc0336fef837b4eed17a7344069d48e19a2ca1492"} Jan 27 13:23:06 crc kubenswrapper[4745]: I0127 13:23:06.741725 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-smgb6" event={"ID":"fffd2408-d3ee-474c-81db-61e26059b497","Type":"ContainerStarted","Data":"a3f67f14bd9f74a7f4c63dc34e4e5326eb6835e0f641e9f0ea1be7109a9c9070"} Jan 27 13:23:06 crc kubenswrapper[4745]: I0127 13:23:06.744529 4745 generic.go:334] "Generic (PLEG): container finished" podID="fd2510f8-8db6-46f9-802b-36b0ec0a84e4" containerID="9e1b0db0f2f977f7bf564a3f65025fa4bda04b8ce7f2289ac21b9ca7b4f0c53a" exitCode=0 Jan 27 13:23:06 crc kubenswrapper[4745]: I0127 13:23:06.744596 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qb9kp" event={"ID":"fd2510f8-8db6-46f9-802b-36b0ec0a84e4","Type":"ContainerDied","Data":"9e1b0db0f2f977f7bf564a3f65025fa4bda04b8ce7f2289ac21b9ca7b4f0c53a"} Jan 27 13:23:06 crc kubenswrapper[4745]: I0127 13:23:06.747674 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgz4q" event={"ID":"3db031e9-bf69-4226-8ac1-cc5029be27d2","Type":"ContainerStarted","Data":"68ff6c5eda6a5b07bb7adf851707cd98dded53b7a904b9e06fe36ba5ed226bfa"} Jan 27 13:23:06 crc kubenswrapper[4745]: I0127 13:23:06.804170 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qgz4q" podStartSLOduration=2.277690988 
podStartE2EDuration="4.804150279s" podCreationTimestamp="2026-01-27 13:23:02 +0000 UTC" firstStartedPulling="2026-01-27 13:23:03.707483149 +0000 UTC m=+4276.512393837" lastFinishedPulling="2026-01-27 13:23:06.23394243 +0000 UTC m=+4279.038853128" observedRunningTime="2026-01-27 13:23:06.800074462 +0000 UTC m=+4279.604985150" watchObservedRunningTime="2026-01-27 13:23:06.804150279 +0000 UTC m=+4279.609060957" Jan 27 13:23:07 crc kubenswrapper[4745]: I0127 13:23:07.755937 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-smgb6" event={"ID":"fffd2408-d3ee-474c-81db-61e26059b497","Type":"ContainerStarted","Data":"893bf1e2b68c4f14148543491ba237c05e07c78958643b2bef5a6df850f4a92e"} Jan 27 13:23:07 crc kubenswrapper[4745]: I0127 13:23:07.759075 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qb9kp" event={"ID":"fd2510f8-8db6-46f9-802b-36b0ec0a84e4","Type":"ContainerStarted","Data":"93ffbb9c81f37b1dde44304c8337a40b888656bb3dac9524741662ab86f79785"} Jan 27 13:23:07 crc kubenswrapper[4745]: I0127 13:23:07.810056 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qb9kp" podStartSLOduration=2.3730894 podStartE2EDuration="4.810035392s" podCreationTimestamp="2026-01-27 13:23:03 +0000 UTC" firstStartedPulling="2026-01-27 13:23:04.717884693 +0000 UTC m=+4277.522795381" lastFinishedPulling="2026-01-27 13:23:07.154830675 +0000 UTC m=+4279.959741373" observedRunningTime="2026-01-27 13:23:07.805158672 +0000 UTC m=+4280.610069370" watchObservedRunningTime="2026-01-27 13:23:07.810035392 +0000 UTC m=+4280.614946080" Jan 27 13:23:08 crc kubenswrapper[4745]: I0127 13:23:08.767733 4745 generic.go:334] "Generic (PLEG): container finished" podID="fffd2408-d3ee-474c-81db-61e26059b497" containerID="893bf1e2b68c4f14148543491ba237c05e07c78958643b2bef5a6df850f4a92e" exitCode=0 Jan 27 13:23:08 crc kubenswrapper[4745]: I0127 13:23:08.767802 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-smgb6" event={"ID":"fffd2408-d3ee-474c-81db-61e26059b497","Type":"ContainerDied","Data":"893bf1e2b68c4f14148543491ba237c05e07c78958643b2bef5a6df850f4a92e"} Jan 27 13:23:09 crc kubenswrapper[4745]: I0127 13:23:09.777321 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-smgb6" event={"ID":"fffd2408-d3ee-474c-81db-61e26059b497","Type":"ContainerStarted","Data":"401b497ba10267b0cf72b93c32119dffbfe38cfd9f97c39560fd6d246557faf5"} Jan 27 13:23:09 crc kubenswrapper[4745]: I0127 13:23:09.802645 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-smgb6" podStartSLOduration=2.348998643 podStartE2EDuration="4.802616232s" podCreationTimestamp="2026-01-27 13:23:05 +0000 UTC" firstStartedPulling="2026-01-27 13:23:06.742541609 +0000 UTC m=+4279.547452297" lastFinishedPulling="2026-01-27 13:23:09.196159198 +0000 UTC m=+4282.001069886" observedRunningTime="2026-01-27 13:23:09.797164836 +0000 UTC m=+4282.602075524" watchObservedRunningTime="2026-01-27 13:23:09.802616232 +0000 UTC m=+4282.607526930" Jan 27 13:23:13 crc kubenswrapper[4745]: I0127 13:23:13.065360 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qgz4q" Jan 27 13:23:13 crc kubenswrapper[4745]: I0127 13:23:13.065731 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-qgz4q" Jan 27 13:23:13 crc kubenswrapper[4745]: I0127 13:23:13.115638 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qgz4q" Jan 27 13:23:13 crc kubenswrapper[4745]: I0127 13:23:13.656043 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qb9kp" Jan 27 13:23:13 crc kubenswrapper[4745]: I0127 13:23:13.656125 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qb9kp" Jan 27 13:23:13 crc kubenswrapper[4745]: I0127 13:23:13.734274 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qb9kp" Jan 27 13:23:13 crc kubenswrapper[4745]: I0127 13:23:13.870143 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qgz4q" Jan 27 13:23:13 crc kubenswrapper[4745]: I0127 13:23:13.870231 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qb9kp" Jan 27 13:23:15 crc kubenswrapper[4745]: I0127 13:23:15.120152 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qb9kp"] Jan 27 13:23:15 crc kubenswrapper[4745]: I0127 13:23:15.830074 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qb9kp" podUID="fd2510f8-8db6-46f9-802b-36b0ec0a84e4" containerName="registry-server" containerID="cri-o://93ffbb9c81f37b1dde44304c8337a40b888656bb3dac9524741662ab86f79785" gracePeriod=2 Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.052255 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-smgb6" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.052572 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-smgb6" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.126839 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qgz4q"] Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.127183 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qgz4q" podUID="3db031e9-bf69-4226-8ac1-cc5029be27d2" containerName="registry-server" containerID="cri-o://68ff6c5eda6a5b07bb7adf851707cd98dded53b7a904b9e06fe36ba5ed226bfa" gracePeriod=2 Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.151609 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-smgb6" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.292727 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qb9kp" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.373484 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mv8j\" (UniqueName: \"kubernetes.io/projected/fd2510f8-8db6-46f9-802b-36b0ec0a84e4-kube-api-access-7mv8j\") pod \"fd2510f8-8db6-46f9-802b-36b0ec0a84e4\" (UID: \"fd2510f8-8db6-46f9-802b-36b0ec0a84e4\") " Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.373570 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd2510f8-8db6-46f9-802b-36b0ec0a84e4-utilities\") pod \"fd2510f8-8db6-46f9-802b-36b0ec0a84e4\" (UID: \"fd2510f8-8db6-46f9-802b-36b0ec0a84e4\") " Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.373659 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd2510f8-8db6-46f9-802b-36b0ec0a84e4-catalog-content\") pod \"fd2510f8-8db6-46f9-802b-36b0ec0a84e4\" (UID: \"fd2510f8-8db6-46f9-802b-36b0ec0a84e4\") " Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.375011 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd2510f8-8db6-46f9-802b-36b0ec0a84e4-utilities" (OuterVolumeSpecName: "utilities") pod "fd2510f8-8db6-46f9-802b-36b0ec0a84e4" (UID: "fd2510f8-8db6-46f9-802b-36b0ec0a84e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.384204 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd2510f8-8db6-46f9-802b-36b0ec0a84e4-kube-api-access-7mv8j" (OuterVolumeSpecName: "kube-api-access-7mv8j") pod "fd2510f8-8db6-46f9-802b-36b0ec0a84e4" (UID: "fd2510f8-8db6-46f9-802b-36b0ec0a84e4"). InnerVolumeSpecName "kube-api-access-7mv8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.431639 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd2510f8-8db6-46f9-802b-36b0ec0a84e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd2510f8-8db6-46f9-802b-36b0ec0a84e4" (UID: "fd2510f8-8db6-46f9-802b-36b0ec0a84e4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.477053 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd2510f8-8db6-46f9-802b-36b0ec0a84e4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.477090 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mv8j\" (UniqueName: \"kubernetes.io/projected/fd2510f8-8db6-46f9-802b-36b0ec0a84e4-kube-api-access-7mv8j\") on node \"crc\" DevicePath \"\"" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.477102 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd2510f8-8db6-46f9-802b-36b0ec0a84e4-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.551783 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qgz4q" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.680617 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3db031e9-bf69-4226-8ac1-cc5029be27d2-utilities\") pod \"3db031e9-bf69-4226-8ac1-cc5029be27d2\" (UID: \"3db031e9-bf69-4226-8ac1-cc5029be27d2\") " Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.680735 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3db031e9-bf69-4226-8ac1-cc5029be27d2-catalog-content\") pod \"3db031e9-bf69-4226-8ac1-cc5029be27d2\" (UID: \"3db031e9-bf69-4226-8ac1-cc5029be27d2\") " Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.680876 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dr4z5\" (UniqueName: \"kubernetes.io/projected/3db031e9-bf69-4226-8ac1-cc5029be27d2-kube-api-access-dr4z5\") pod \"3db031e9-bf69-4226-8ac1-cc5029be27d2\" (UID: \"3db031e9-bf69-4226-8ac1-cc5029be27d2\") " Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.682106 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3db031e9-bf69-4226-8ac1-cc5029be27d2-utilities" (OuterVolumeSpecName: "utilities") pod "3db031e9-bf69-4226-8ac1-cc5029be27d2" (UID: "3db031e9-bf69-4226-8ac1-cc5029be27d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.689366 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3db031e9-bf69-4226-8ac1-cc5029be27d2-kube-api-access-dr4z5" (OuterVolumeSpecName: "kube-api-access-dr4z5") pod "3db031e9-bf69-4226-8ac1-cc5029be27d2" (UID: "3db031e9-bf69-4226-8ac1-cc5029be27d2"). InnerVolumeSpecName "kube-api-access-dr4z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.715364 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3db031e9-bf69-4226-8ac1-cc5029be27d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3db031e9-bf69-4226-8ac1-cc5029be27d2" (UID: "3db031e9-bf69-4226-8ac1-cc5029be27d2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.782395 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3db031e9-bf69-4226-8ac1-cc5029be27d2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.782692 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dr4z5\" (UniqueName: \"kubernetes.io/projected/3db031e9-bf69-4226-8ac1-cc5029be27d2-kube-api-access-dr4z5\") on node \"crc\" DevicePath \"\"" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.782704 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3db031e9-bf69-4226-8ac1-cc5029be27d2-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.842031 4745 generic.go:334] "Generic (PLEG): container finished" podID="3db031e9-bf69-4226-8ac1-cc5029be27d2" containerID="68ff6c5eda6a5b07bb7adf851707cd98dded53b7a904b9e06fe36ba5ed226bfa" exitCode=0 Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.842113 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qgz4q" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.842124 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgz4q" event={"ID":"3db031e9-bf69-4226-8ac1-cc5029be27d2","Type":"ContainerDied","Data":"68ff6c5eda6a5b07bb7adf851707cd98dded53b7a904b9e06fe36ba5ed226bfa"} Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.842161 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgz4q" event={"ID":"3db031e9-bf69-4226-8ac1-cc5029be27d2","Type":"ContainerDied","Data":"ec58407fb9a1458ebffa6530f1ee01f56f0d784ac0a26b2b58e1ea709d419ef6"} Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.842187 4745 scope.go:117] "RemoveContainer" containerID="68ff6c5eda6a5b07bb7adf851707cd98dded53b7a904b9e06fe36ba5ed226bfa" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.847102 4745 generic.go:334] "Generic (PLEG): container finished" podID="fd2510f8-8db6-46f9-802b-36b0ec0a84e4" containerID="93ffbb9c81f37b1dde44304c8337a40b888656bb3dac9524741662ab86f79785" exitCode=0 Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.847139 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qb9kp" event={"ID":"fd2510f8-8db6-46f9-802b-36b0ec0a84e4","Type":"ContainerDied","Data":"93ffbb9c81f37b1dde44304c8337a40b888656bb3dac9524741662ab86f79785"} Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.847179 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qb9kp" event={"ID":"fd2510f8-8db6-46f9-802b-36b0ec0a84e4","Type":"ContainerDied","Data":"ee7ef72b998284ffa574d07add11440bc179fb97bb5bb8100dd0b5af2a340298"} Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.847231 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qb9kp" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.868216 4745 scope.go:117] "RemoveContainer" containerID="cef381d98bd0c59a3816ddb0e14dad1a2cb2c63eba4a320ab809b2f9fc353dd2" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.890106 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qgz4q"] Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.897980 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qgz4q"] Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.905080 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qb9kp"] Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.931735 4745 scope.go:117] "RemoveContainer" containerID="5ecd23ed7e80e70a42836f28694c876cd7120920f143b7685fb28e03390ea264" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.947288 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qb9kp"] Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.949059 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-smgb6" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.950598 4745 scope.go:117] "RemoveContainer" containerID="68ff6c5eda6a5b07bb7adf851707cd98dded53b7a904b9e06fe36ba5ed226bfa" Jan 27 13:23:16 crc kubenswrapper[4745]: E0127 13:23:16.950992 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68ff6c5eda6a5b07bb7adf851707cd98dded53b7a904b9e06fe36ba5ed226bfa\": container with ID starting with 68ff6c5eda6a5b07bb7adf851707cd98dded53b7a904b9e06fe36ba5ed226bfa not found: ID does not exist" containerID="68ff6c5eda6a5b07bb7adf851707cd98dded53b7a904b9e06fe36ba5ed226bfa" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.951029 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68ff6c5eda6a5b07bb7adf851707cd98dded53b7a904b9e06fe36ba5ed226bfa"} err="failed to get container status \"68ff6c5eda6a5b07bb7adf851707cd98dded53b7a904b9e06fe36ba5ed226bfa\": rpc error: code = NotFound desc = could not find container \"68ff6c5eda6a5b07bb7adf851707cd98dded53b7a904b9e06fe36ba5ed226bfa\": container with ID starting with 68ff6c5eda6a5b07bb7adf851707cd98dded53b7a904b9e06fe36ba5ed226bfa not found: ID does not exist" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.951054 4745 scope.go:117] "RemoveContainer" containerID="cef381d98bd0c59a3816ddb0e14dad1a2cb2c63eba4a320ab809b2f9fc353dd2" Jan 27 13:23:16 crc kubenswrapper[4745]: E0127 13:23:16.951527 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cef381d98bd0c59a3816ddb0e14dad1a2cb2c63eba4a320ab809b2f9fc353dd2\": container with ID starting with cef381d98bd0c59a3816ddb0e14dad1a2cb2c63eba4a320ab809b2f9fc353dd2 not found: ID does not exist" containerID="cef381d98bd0c59a3816ddb0e14dad1a2cb2c63eba4a320ab809b2f9fc353dd2" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.951555 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cef381d98bd0c59a3816ddb0e14dad1a2cb2c63eba4a320ab809b2f9fc353dd2"} err="failed to get container status \"cef381d98bd0c59a3816ddb0e14dad1a2cb2c63eba4a320ab809b2f9fc353dd2\": rpc error: code = 
NotFound desc = could not find container \"cef381d98bd0c59a3816ddb0e14dad1a2cb2c63eba4a320ab809b2f9fc353dd2\": container with ID starting with cef381d98bd0c59a3816ddb0e14dad1a2cb2c63eba4a320ab809b2f9fc353dd2 not found: ID does not exist" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.951571 4745 scope.go:117] "RemoveContainer" containerID="5ecd23ed7e80e70a42836f28694c876cd7120920f143b7685fb28e03390ea264" Jan 27 13:23:16 crc kubenswrapper[4745]: E0127 13:23:16.951802 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ecd23ed7e80e70a42836f28694c876cd7120920f143b7685fb28e03390ea264\": container with ID starting with 5ecd23ed7e80e70a42836f28694c876cd7120920f143b7685fb28e03390ea264 not found: ID does not exist" containerID="5ecd23ed7e80e70a42836f28694c876cd7120920f143b7685fb28e03390ea264" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.951839 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ecd23ed7e80e70a42836f28694c876cd7120920f143b7685fb28e03390ea264"} err="failed to get container status \"5ecd23ed7e80e70a42836f28694c876cd7120920f143b7685fb28e03390ea264\": rpc error: code = NotFound desc = could not find container \"5ecd23ed7e80e70a42836f28694c876cd7120920f143b7685fb28e03390ea264\": container with ID starting with 5ecd23ed7e80e70a42836f28694c876cd7120920f143b7685fb28e03390ea264 not found: ID does not exist" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.951857 4745 scope.go:117] "RemoveContainer" containerID="93ffbb9c81f37b1dde44304c8337a40b888656bb3dac9524741662ab86f79785" Jan 27 13:23:16 crc kubenswrapper[4745]: I0127 13:23:16.992716 4745 scope.go:117] "RemoveContainer" containerID="9e1b0db0f2f977f7bf564a3f65025fa4bda04b8ce7f2289ac21b9ca7b4f0c53a" Jan 27 13:23:17 crc kubenswrapper[4745]: I0127 13:23:17.014721 4745 scope.go:117] "RemoveContainer" containerID="481bc3f40bc53b11fed02d38cdf396475895fbe4c5b6da43180e5e87ce2904f2" Jan 27 13:23:17 crc kubenswrapper[4745]: I0127 13:23:17.036920 4745 scope.go:117] "RemoveContainer" containerID="93ffbb9c81f37b1dde44304c8337a40b888656bb3dac9524741662ab86f79785" Jan 27 13:23:17 crc kubenswrapper[4745]: E0127 13:23:17.037375 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93ffbb9c81f37b1dde44304c8337a40b888656bb3dac9524741662ab86f79785\": container with ID starting with 93ffbb9c81f37b1dde44304c8337a40b888656bb3dac9524741662ab86f79785 not found: ID does not exist" containerID="93ffbb9c81f37b1dde44304c8337a40b888656bb3dac9524741662ab86f79785" Jan 27 13:23:17 crc kubenswrapper[4745]: I0127 13:23:17.037439 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93ffbb9c81f37b1dde44304c8337a40b888656bb3dac9524741662ab86f79785"} err="failed to get container status \"93ffbb9c81f37b1dde44304c8337a40b888656bb3dac9524741662ab86f79785\": rpc error: code = NotFound desc = could not find container \"93ffbb9c81f37b1dde44304c8337a40b888656bb3dac9524741662ab86f79785\": container with ID starting with 93ffbb9c81f37b1dde44304c8337a40b888656bb3dac9524741662ab86f79785 not found: ID does not exist" Jan 27 13:23:17 crc kubenswrapper[4745]: I0127 13:23:17.037480 4745 scope.go:117] "RemoveContainer" containerID="9e1b0db0f2f977f7bf564a3f65025fa4bda04b8ce7f2289ac21b9ca7b4f0c53a" Jan 27 13:23:17 crc kubenswrapper[4745]: E0127 13:23:17.038034 4745 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"9e1b0db0f2f977f7bf564a3f65025fa4bda04b8ce7f2289ac21b9ca7b4f0c53a\": container with ID starting with 9e1b0db0f2f977f7bf564a3f65025fa4bda04b8ce7f2289ac21b9ca7b4f0c53a not found: ID does not exist" containerID="9e1b0db0f2f977f7bf564a3f65025fa4bda04b8ce7f2289ac21b9ca7b4f0c53a" Jan 27 13:23:17 crc kubenswrapper[4745]: I0127 13:23:17.038065 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e1b0db0f2f977f7bf564a3f65025fa4bda04b8ce7f2289ac21b9ca7b4f0c53a"} err="failed to get container status \"9e1b0db0f2f977f7bf564a3f65025fa4bda04b8ce7f2289ac21b9ca7b4f0c53a\": rpc error: code = NotFound desc = could not find container \"9e1b0db0f2f977f7bf564a3f65025fa4bda04b8ce7f2289ac21b9ca7b4f0c53a\": container with ID starting with 9e1b0db0f2f977f7bf564a3f65025fa4bda04b8ce7f2289ac21b9ca7b4f0c53a not found: ID does not exist" Jan 27 13:23:17 crc kubenswrapper[4745]: I0127 13:23:17.038103 4745 scope.go:117] "RemoveContainer" containerID="481bc3f40bc53b11fed02d38cdf396475895fbe4c5b6da43180e5e87ce2904f2" Jan 27 13:23:17 crc kubenswrapper[4745]: E0127 13:23:17.038627 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"481bc3f40bc53b11fed02d38cdf396475895fbe4c5b6da43180e5e87ce2904f2\": container with ID starting with 481bc3f40bc53b11fed02d38cdf396475895fbe4c5b6da43180e5e87ce2904f2 not found: ID does not exist" containerID="481bc3f40bc53b11fed02d38cdf396475895fbe4c5b6da43180e5e87ce2904f2" Jan 27 13:23:17 crc kubenswrapper[4745]: I0127 13:23:17.038755 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"481bc3f40bc53b11fed02d38cdf396475895fbe4c5b6da43180e5e87ce2904f2"} err="failed to get container status \"481bc3f40bc53b11fed02d38cdf396475895fbe4c5b6da43180e5e87ce2904f2\": rpc error: code = NotFound desc = could not find container \"481bc3f40bc53b11fed02d38cdf396475895fbe4c5b6da43180e5e87ce2904f2\": container with ID starting with 481bc3f40bc53b11fed02d38cdf396475895fbe4c5b6da43180e5e87ce2904f2 not found: ID does not exist" Jan 27 13:23:18 crc kubenswrapper[4745]: I0127 13:23:18.092639 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3db031e9-bf69-4226-8ac1-cc5029be27d2" path="/var/lib/kubelet/pods/3db031e9-bf69-4226-8ac1-cc5029be27d2/volumes" Jan 27 13:23:18 crc kubenswrapper[4745]: I0127 13:23:18.094011 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd2510f8-8db6-46f9-802b-36b0ec0a84e4" path="/var/lib/kubelet/pods/fd2510f8-8db6-46f9-802b-36b0ec0a84e4/volumes" Jan 27 13:23:19 crc kubenswrapper[4745]: I0127 13:23:19.520841 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-smgb6"] Jan 27 13:23:19 crc kubenswrapper[4745]: I0127 13:23:19.521044 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-smgb6" podUID="fffd2408-d3ee-474c-81db-61e26059b497" containerName="registry-server" containerID="cri-o://401b497ba10267b0cf72b93c32119dffbfe38cfd9f97c39560fd6d246557faf5" gracePeriod=2 Jan 27 13:23:19 crc kubenswrapper[4745]: I0127 13:23:19.878326 4745 generic.go:334] "Generic (PLEG): container finished" podID="fffd2408-d3ee-474c-81db-61e26059b497" containerID="401b497ba10267b0cf72b93c32119dffbfe38cfd9f97c39560fd6d246557faf5" exitCode=0 Jan 27 13:23:19 crc kubenswrapper[4745]: I0127 13:23:19.878450 4745 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-smgb6" event={"ID":"fffd2408-d3ee-474c-81db-61e26059b497","Type":"ContainerDied","Data":"401b497ba10267b0cf72b93c32119dffbfe38cfd9f97c39560fd6d246557faf5"} Jan 27 13:23:19 crc kubenswrapper[4745]: I0127 13:23:19.944286 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-smgb6" Jan 27 13:23:20 crc kubenswrapper[4745]: I0127 13:23:20.026437 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fffd2408-d3ee-474c-81db-61e26059b497-catalog-content\") pod \"fffd2408-d3ee-474c-81db-61e26059b497\" (UID: \"fffd2408-d3ee-474c-81db-61e26059b497\") " Jan 27 13:23:20 crc kubenswrapper[4745]: I0127 13:23:20.026583 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7wvc\" (UniqueName: \"kubernetes.io/projected/fffd2408-d3ee-474c-81db-61e26059b497-kube-api-access-l7wvc\") pod \"fffd2408-d3ee-474c-81db-61e26059b497\" (UID: \"fffd2408-d3ee-474c-81db-61e26059b497\") " Jan 27 13:23:20 crc kubenswrapper[4745]: I0127 13:23:20.026646 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fffd2408-d3ee-474c-81db-61e26059b497-utilities\") pod \"fffd2408-d3ee-474c-81db-61e26059b497\" (UID: \"fffd2408-d3ee-474c-81db-61e26059b497\") " Jan 27 13:23:20 crc kubenswrapper[4745]: I0127 13:23:20.027834 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fffd2408-d3ee-474c-81db-61e26059b497-utilities" (OuterVolumeSpecName: "utilities") pod "fffd2408-d3ee-474c-81db-61e26059b497" (UID: "fffd2408-d3ee-474c-81db-61e26059b497"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:23:20 crc kubenswrapper[4745]: I0127 13:23:20.128699 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fffd2408-d3ee-474c-81db-61e26059b497-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:23:20 crc kubenswrapper[4745]: I0127 13:23:20.239369 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fffd2408-d3ee-474c-81db-61e26059b497-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fffd2408-d3ee-474c-81db-61e26059b497" (UID: "fffd2408-d3ee-474c-81db-61e26059b497"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:23:20 crc kubenswrapper[4745]: I0127 13:23:20.331121 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fffd2408-d3ee-474c-81db-61e26059b497-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:23:20 crc kubenswrapper[4745]: I0127 13:23:20.481327 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fffd2408-d3ee-474c-81db-61e26059b497-kube-api-access-l7wvc" (OuterVolumeSpecName: "kube-api-access-l7wvc") pod "fffd2408-d3ee-474c-81db-61e26059b497" (UID: "fffd2408-d3ee-474c-81db-61e26059b497"). InnerVolumeSpecName "kube-api-access-l7wvc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:23:20 crc kubenswrapper[4745]: I0127 13:23:20.534880 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7wvc\" (UniqueName: \"kubernetes.io/projected/fffd2408-d3ee-474c-81db-61e26059b497-kube-api-access-l7wvc\") on node \"crc\" DevicePath \"\"" Jan 27 13:23:20 crc kubenswrapper[4745]: I0127 13:23:20.891748 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-smgb6" Jan 27 13:23:20 crc kubenswrapper[4745]: I0127 13:23:20.891602 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-smgb6" event={"ID":"fffd2408-d3ee-474c-81db-61e26059b497","Type":"ContainerDied","Data":"a3f67f14bd9f74a7f4c63dc34e4e5326eb6835e0f641e9f0ea1be7109a9c9070"} Jan 27 13:23:20 crc kubenswrapper[4745]: I0127 13:23:20.892255 4745 scope.go:117] "RemoveContainer" containerID="401b497ba10267b0cf72b93c32119dffbfe38cfd9f97c39560fd6d246557faf5" Jan 27 13:23:20 crc kubenswrapper[4745]: I0127 13:23:20.930200 4745 scope.go:117] "RemoveContainer" containerID="893bf1e2b68c4f14148543491ba237c05e07c78958643b2bef5a6df850f4a92e" Jan 27 13:23:20 crc kubenswrapper[4745]: I0127 13:23:20.961655 4745 scope.go:117] "RemoveContainer" containerID="913628c0cd0f127ef30780ebc0336fef837b4eed17a7344069d48e19a2ca1492" Jan 27 13:23:20 crc kubenswrapper[4745]: I0127 13:23:20.964725 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-smgb6"] Jan 27 13:23:20 crc kubenswrapper[4745]: I0127 13:23:20.989271 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-smgb6"] Jan 27 13:23:22 crc kubenswrapper[4745]: I0127 13:23:22.088745 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fffd2408-d3ee-474c-81db-61e26059b497" path="/var/lib/kubelet/pods/fffd2408-d3ee-474c-81db-61e26059b497/volumes" Jan 27 13:24:05 crc kubenswrapper[4745]: I0127 13:24:05.967252 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:24:05 crc kubenswrapper[4745]: I0127 13:24:05.968109 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:24:35 crc kubenswrapper[4745]: I0127 13:24:35.967729 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:24:35 crc kubenswrapper[4745]: I0127 13:24:35.968521 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:25:05 crc kubenswrapper[4745]: I0127 13:25:05.967252 4745 
patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:25:05 crc kubenswrapper[4745]: I0127 13:25:05.967893 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:25:05 crc kubenswrapper[4745]: I0127 13:25:05.967942 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" Jan 27 13:25:05 crc kubenswrapper[4745]: I0127 13:25:05.968554 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"325ea61030e1d83d18ee20378ba80ae8f24adeecdff6874e78ec7f630de1a9a0"} pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 13:25:05 crc kubenswrapper[4745]: I0127 13:25:05.968616 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" containerID="cri-o://325ea61030e1d83d18ee20378ba80ae8f24adeecdff6874e78ec7f630de1a9a0" gracePeriod=600 Jan 27 13:25:06 crc kubenswrapper[4745]: I0127 13:25:06.733618 4745 generic.go:334] "Generic (PLEG): container finished" podID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerID="325ea61030e1d83d18ee20378ba80ae8f24adeecdff6874e78ec7f630de1a9a0" exitCode=0 Jan 27 13:25:06 crc kubenswrapper[4745]: I0127 13:25:06.733710 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerDied","Data":"325ea61030e1d83d18ee20378ba80ae8f24adeecdff6874e78ec7f630de1a9a0"} Jan 27 13:25:06 crc kubenswrapper[4745]: I0127 13:25:06.733997 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerStarted","Data":"988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb"} Jan 27 13:25:06 crc kubenswrapper[4745]: I0127 13:25:06.734026 4745 scope.go:117] "RemoveContainer" containerID="cff453225c3088eacdd5e489fadf162ebbb5fc48161ca3c900589db2e569628e" Jan 27 13:27:35 crc kubenswrapper[4745]: I0127 13:27:35.967657 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:27:35 crc kubenswrapper[4745]: I0127 13:27:35.968521 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 27 13:28:05 crc kubenswrapper[4745]: I0127 13:28:05.967633 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:28:05 crc kubenswrapper[4745]: I0127 13:28:05.968284 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:28:35 crc kubenswrapper[4745]: I0127 13:28:35.967345 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:28:35 crc kubenswrapper[4745]: I0127 13:28:35.967961 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:28:35 crc kubenswrapper[4745]: I0127 13:28:35.968006 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" Jan 27 13:28:35 crc kubenswrapper[4745]: I0127 13:28:35.968620 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb"} pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 13:28:35 crc kubenswrapper[4745]: I0127 13:28:35.968672 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" containerID="cri-o://988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb" gracePeriod=600 Jan 27 13:28:36 crc kubenswrapper[4745]: E0127 13:28:36.117614 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:28:36 crc kubenswrapper[4745]: I0127 13:28:36.489238 4745 generic.go:334] "Generic (PLEG): container finished" podID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerID="988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb" exitCode=0 Jan 27 13:28:36 crc kubenswrapper[4745]: I0127 13:28:36.489317 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" 
event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerDied","Data":"988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb"} Jan 27 13:28:36 crc kubenswrapper[4745]: I0127 13:28:36.489593 4745 scope.go:117] "RemoveContainer" containerID="325ea61030e1d83d18ee20378ba80ae8f24adeecdff6874e78ec7f630de1a9a0" Jan 27 13:28:36 crc kubenswrapper[4745]: I0127 13:28:36.490217 4745 scope.go:117] "RemoveContainer" containerID="988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb" Jan 27 13:28:36 crc kubenswrapper[4745]: E0127 13:28:36.490437 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:28:49 crc kubenswrapper[4745]: I0127 13:28:49.074303 4745 scope.go:117] "RemoveContainer" containerID="988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb" Jan 27 13:28:49 crc kubenswrapper[4745]: E0127 13:28:49.075024 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:29:01 crc kubenswrapper[4745]: I0127 13:29:01.073704 4745 scope.go:117] "RemoveContainer" containerID="988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb" Jan 27 13:29:01 crc kubenswrapper[4745]: E0127 13:29:01.074572 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:29:12 crc kubenswrapper[4745]: I0127 13:29:12.073776 4745 scope.go:117] "RemoveContainer" containerID="988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb" Jan 27 13:29:12 crc kubenswrapper[4745]: E0127 13:29:12.074574 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:29:25 crc kubenswrapper[4745]: I0127 13:29:25.074090 4745 scope.go:117] "RemoveContainer" containerID="988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb" Jan 27 13:29:25 crc kubenswrapper[4745]: E0127 13:29:25.074836 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:29:37 crc kubenswrapper[4745]: I0127 13:29:37.073659 4745 scope.go:117] "RemoveContainer" containerID="988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb" Jan 27 13:29:37 crc kubenswrapper[4745]: E0127 13:29:37.074516 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:29:51 crc kubenswrapper[4745]: I0127 13:29:51.075096 4745 scope.go:117] "RemoveContainer" containerID="988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb" Jan 27 13:29:51 crc kubenswrapper[4745]: E0127 13:29:51.077101 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:30:00 crc kubenswrapper[4745]: I0127 13:30:00.165462 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492010-mwk8h"] Jan 27 13:30:00 crc kubenswrapper[4745]: E0127 13:30:00.166205 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fffd2408-d3ee-474c-81db-61e26059b497" containerName="registry-server" Jan 27 13:30:00 crc kubenswrapper[4745]: I0127 13:30:00.166232 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="fffd2408-d3ee-474c-81db-61e26059b497" containerName="registry-server" Jan 27 13:30:00 crc kubenswrapper[4745]: E0127 13:30:00.166245 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db031e9-bf69-4226-8ac1-cc5029be27d2" containerName="extract-utilities" Jan 27 13:30:00 crc kubenswrapper[4745]: I0127 13:30:00.166253 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db031e9-bf69-4226-8ac1-cc5029be27d2" containerName="extract-utilities" Jan 27 13:30:00 crc kubenswrapper[4745]: E0127 13:30:00.166273 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fffd2408-d3ee-474c-81db-61e26059b497" containerName="extract-utilities" Jan 27 13:30:00 crc kubenswrapper[4745]: I0127 13:30:00.166281 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="fffd2408-d3ee-474c-81db-61e26059b497" containerName="extract-utilities" Jan 27 13:30:00 crc kubenswrapper[4745]: E0127 13:30:00.166302 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db031e9-bf69-4226-8ac1-cc5029be27d2" containerName="registry-server" Jan 27 13:30:00 crc kubenswrapper[4745]: I0127 13:30:00.166309 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db031e9-bf69-4226-8ac1-cc5029be27d2" containerName="registry-server" Jan 27 13:30:00 crc kubenswrapper[4745]: E0127 13:30:00.166318 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd2510f8-8db6-46f9-802b-36b0ec0a84e4" 
containerName="extract-utilities" Jan 27 13:30:00 crc kubenswrapper[4745]: I0127 13:30:00.166325 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd2510f8-8db6-46f9-802b-36b0ec0a84e4" containerName="extract-utilities" Jan 27 13:30:00 crc kubenswrapper[4745]: E0127 13:30:00.166338 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db031e9-bf69-4226-8ac1-cc5029be27d2" containerName="extract-content" Jan 27 13:30:00 crc kubenswrapper[4745]: I0127 13:30:00.166344 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db031e9-bf69-4226-8ac1-cc5029be27d2" containerName="extract-content" Jan 27 13:30:00 crc kubenswrapper[4745]: E0127 13:30:00.166373 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd2510f8-8db6-46f9-802b-36b0ec0a84e4" containerName="extract-content" Jan 27 13:30:00 crc kubenswrapper[4745]: I0127 13:30:00.166380 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd2510f8-8db6-46f9-802b-36b0ec0a84e4" containerName="extract-content" Jan 27 13:30:00 crc kubenswrapper[4745]: E0127 13:30:00.166391 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd2510f8-8db6-46f9-802b-36b0ec0a84e4" containerName="registry-server" Jan 27 13:30:00 crc kubenswrapper[4745]: I0127 13:30:00.166398 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd2510f8-8db6-46f9-802b-36b0ec0a84e4" containerName="registry-server" Jan 27 13:30:00 crc kubenswrapper[4745]: E0127 13:30:00.166408 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fffd2408-d3ee-474c-81db-61e26059b497" containerName="extract-content" Jan 27 13:30:00 crc kubenswrapper[4745]: I0127 13:30:00.166415 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="fffd2408-d3ee-474c-81db-61e26059b497" containerName="extract-content" Jan 27 13:30:00 crc kubenswrapper[4745]: I0127 13:30:00.166568 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="3db031e9-bf69-4226-8ac1-cc5029be27d2" containerName="registry-server" Jan 27 13:30:00 crc kubenswrapper[4745]: I0127 13:30:00.166593 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="fffd2408-d3ee-474c-81db-61e26059b497" containerName="registry-server" Jan 27 13:30:00 crc kubenswrapper[4745]: I0127 13:30:00.166603 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd2510f8-8db6-46f9-802b-36b0ec0a84e4" containerName="registry-server" Jan 27 13:30:00 crc kubenswrapper[4745]: I0127 13:30:00.167199 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-mwk8h" Jan 27 13:30:00 crc kubenswrapper[4745]: I0127 13:30:00.172496 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492010-mwk8h"] Jan 27 13:30:00 crc kubenswrapper[4745]: I0127 13:30:00.175307 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 13:30:00 crc kubenswrapper[4745]: I0127 13:30:00.175347 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 13:30:00 crc kubenswrapper[4745]: I0127 13:30:00.302501 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbbrj\" (UniqueName: \"kubernetes.io/projected/7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1-kube-api-access-wbbrj\") pod \"collect-profiles-29492010-mwk8h\" (UID: \"7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-mwk8h" Jan 27 13:30:00 crc kubenswrapper[4745]: I0127 13:30:00.302608 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1-secret-volume\") pod \"collect-profiles-29492010-mwk8h\" (UID: \"7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-mwk8h" Jan 27 13:30:00 crc kubenswrapper[4745]: I0127 13:30:00.302633 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1-config-volume\") pod \"collect-profiles-29492010-mwk8h\" (UID: \"7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-mwk8h" Jan 27 13:30:00 crc kubenswrapper[4745]: I0127 13:30:00.403248 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1-secret-volume\") pod \"collect-profiles-29492010-mwk8h\" (UID: \"7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-mwk8h" Jan 27 13:30:00 crc kubenswrapper[4745]: I0127 13:30:00.403296 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1-config-volume\") pod \"collect-profiles-29492010-mwk8h\" (UID: \"7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-mwk8h" Jan 27 13:30:00 crc kubenswrapper[4745]: I0127 13:30:00.403334 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbbrj\" (UniqueName: \"kubernetes.io/projected/7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1-kube-api-access-wbbrj\") pod \"collect-profiles-29492010-mwk8h\" (UID: \"7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-mwk8h" Jan 27 13:30:00 crc kubenswrapper[4745]: I0127 13:30:00.404282 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1-config-volume\") pod 
\"collect-profiles-29492010-mwk8h\" (UID: \"7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-mwk8h" Jan 27 13:30:00 crc kubenswrapper[4745]: I0127 13:30:00.410495 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1-secret-volume\") pod \"collect-profiles-29492010-mwk8h\" (UID: \"7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-mwk8h" Jan 27 13:30:00 crc kubenswrapper[4745]: I0127 13:30:00.421384 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbbrj\" (UniqueName: \"kubernetes.io/projected/7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1-kube-api-access-wbbrj\") pod \"collect-profiles-29492010-mwk8h\" (UID: \"7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-mwk8h" Jan 27 13:30:00 crc kubenswrapper[4745]: I0127 13:30:00.493547 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-mwk8h" Jan 27 13:30:00 crc kubenswrapper[4745]: I0127 13:30:00.948195 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492010-mwk8h"] Jan 27 13:30:01 crc kubenswrapper[4745]: I0127 13:30:01.113379 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-mwk8h" event={"ID":"7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1","Type":"ContainerStarted","Data":"aa5e5595d988f0b3ae8352379bf6cbddaf3b144a20433733db23959c6014b5c4"} Jan 27 13:30:01 crc kubenswrapper[4745]: I0127 13:30:01.113431 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-mwk8h" event={"ID":"7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1","Type":"ContainerStarted","Data":"ca93875a872e88713ef2270fb8adc1ea849f102bab42bf0727443e10fb77e4d1"} Jan 27 13:30:01 crc kubenswrapper[4745]: I0127 13:30:01.131216 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-mwk8h" podStartSLOduration=1.131196543 podStartE2EDuration="1.131196543s" podCreationTimestamp="2026-01-27 13:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:30:01.129622958 +0000 UTC m=+4693.934533646" watchObservedRunningTime="2026-01-27 13:30:01.131196543 +0000 UTC m=+4693.936107231" Jan 27 13:30:02 crc kubenswrapper[4745]: I0127 13:30:02.074130 4745 scope.go:117] "RemoveContainer" containerID="988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb" Jan 27 13:30:02 crc kubenswrapper[4745]: E0127 13:30:02.076156 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:30:02 crc kubenswrapper[4745]: I0127 13:30:02.123492 4745 generic.go:334] "Generic (PLEG): container finished" podID="7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1" 
containerID="aa5e5595d988f0b3ae8352379bf6cbddaf3b144a20433733db23959c6014b5c4" exitCode=0 Jan 27 13:30:02 crc kubenswrapper[4745]: I0127 13:30:02.123579 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-mwk8h" event={"ID":"7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1","Type":"ContainerDied","Data":"aa5e5595d988f0b3ae8352379bf6cbddaf3b144a20433733db23959c6014b5c4"} Jan 27 13:30:03 crc kubenswrapper[4745]: I0127 13:30:03.392424 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-mwk8h" Jan 27 13:30:03 crc kubenswrapper[4745]: I0127 13:30:03.499648 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1-secret-volume\") pod \"7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1\" (UID: \"7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1\") " Jan 27 13:30:03 crc kubenswrapper[4745]: I0127 13:30:03.499800 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1-config-volume\") pod \"7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1\" (UID: \"7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1\") " Jan 27 13:30:03 crc kubenswrapper[4745]: I0127 13:30:03.499910 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbbrj\" (UniqueName: \"kubernetes.io/projected/7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1-kube-api-access-wbbrj\") pod \"7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1\" (UID: \"7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1\") " Jan 27 13:30:03 crc kubenswrapper[4745]: I0127 13:30:03.500409 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1-config-volume" (OuterVolumeSpecName: "config-volume") pod "7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1" (UID: "7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:30:03 crc kubenswrapper[4745]: I0127 13:30:03.504635 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1" (UID: "7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:30:03 crc kubenswrapper[4745]: I0127 13:30:03.504768 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1-kube-api-access-wbbrj" (OuterVolumeSpecName: "kube-api-access-wbbrj") pod "7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1" (UID: "7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1"). InnerVolumeSpecName "kube-api-access-wbbrj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:30:03 crc kubenswrapper[4745]: I0127 13:30:03.601398 4745 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:03 crc kubenswrapper[4745]: I0127 13:30:03.601460 4745 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:03 crc kubenswrapper[4745]: I0127 13:30:03.601483 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbbrj\" (UniqueName: \"kubernetes.io/projected/7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1-kube-api-access-wbbrj\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:04 crc kubenswrapper[4745]: I0127 13:30:04.136582 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-mwk8h" event={"ID":"7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1","Type":"ContainerDied","Data":"ca93875a872e88713ef2270fb8adc1ea849f102bab42bf0727443e10fb77e4d1"} Jan 27 13:30:04 crc kubenswrapper[4745]: I0127 13:30:04.136616 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca93875a872e88713ef2270fb8adc1ea849f102bab42bf0727443e10fb77e4d1" Jan 27 13:30:04 crc kubenswrapper[4745]: I0127 13:30:04.136665 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-mwk8h" Jan 27 13:30:04 crc kubenswrapper[4745]: I0127 13:30:04.460692 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491965-kjxlw"] Jan 27 13:30:04 crc kubenswrapper[4745]: I0127 13:30:04.466651 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491965-kjxlw"] Jan 27 13:30:06 crc kubenswrapper[4745]: I0127 13:30:06.086438 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff4aa978-b605-46dc-9603-96638efd0c73" path="/var/lib/kubelet/pods/ff4aa978-b605-46dc-9603-96638efd0c73/volumes" Jan 27 13:30:15 crc kubenswrapper[4745]: I0127 13:30:15.074160 4745 scope.go:117] "RemoveContainer" containerID="988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb" Jan 27 13:30:15 crc kubenswrapper[4745]: E0127 13:30:15.074985 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:30:30 crc kubenswrapper[4745]: I0127 13:30:30.073901 4745 scope.go:117] "RemoveContainer" containerID="988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb" Jan 27 13:30:30 crc kubenswrapper[4745]: E0127 13:30:30.074640 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:30:43 crc kubenswrapper[4745]: I0127 13:30:43.074339 4745 scope.go:117] "RemoveContainer" containerID="988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb" Jan 27 13:30:43 crc kubenswrapper[4745]: E0127 13:30:43.075197 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:30:57 crc kubenswrapper[4745]: I0127 13:30:57.073419 4745 scope.go:117] "RemoveContainer" containerID="988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb" Jan 27 13:30:57 crc kubenswrapper[4745]: E0127 13:30:57.074247 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:30:57 crc kubenswrapper[4745]: I0127 13:30:57.296879 4745 scope.go:117] "RemoveContainer" containerID="b5b3be6f4707d4160e8100e11c91bbb54266b6f7a996db256f03f748d6b112c4" Jan 27 13:31:10 crc kubenswrapper[4745]: I0127 13:31:10.074318 4745 scope.go:117] "RemoveContainer" containerID="988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb" Jan 27 13:31:10 crc kubenswrapper[4745]: E0127 13:31:10.075154 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:31:22 crc kubenswrapper[4745]: I0127 13:31:22.074493 4745 scope.go:117] "RemoveContainer" containerID="988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb" Jan 27 13:31:22 crc kubenswrapper[4745]: E0127 13:31:22.075407 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:31:36 crc kubenswrapper[4745]: I0127 13:31:36.074871 4745 scope.go:117] "RemoveContainer" containerID="988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb" Jan 27 13:31:36 crc kubenswrapper[4745]: E0127 13:31:36.076431 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:31:48 crc kubenswrapper[4745]: I0127 13:31:48.079102 4745 scope.go:117] "RemoveContainer" containerID="988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb" Jan 27 13:31:48 crc kubenswrapper[4745]: E0127 13:31:48.079635 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:32:02 crc kubenswrapper[4745]: I0127 13:32:02.073767 4745 scope.go:117] "RemoveContainer" containerID="988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb" Jan 27 13:32:02 crc kubenswrapper[4745]: E0127 13:32:02.074648 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:32:13 crc kubenswrapper[4745]: I0127 13:32:13.074499 4745 scope.go:117] "RemoveContainer" containerID="988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb" Jan 27 13:32:13 crc kubenswrapper[4745]: E0127 13:32:13.075440 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:32:24 crc kubenswrapper[4745]: I0127 13:32:24.077328 4745 scope.go:117] "RemoveContainer" containerID="988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb" Jan 27 13:32:24 crc kubenswrapper[4745]: E0127 13:32:24.078296 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:32:38 crc kubenswrapper[4745]: I0127 13:32:38.080544 4745 scope.go:117] "RemoveContainer" containerID="988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb" Jan 27 13:32:38 crc kubenswrapper[4745]: E0127 13:32:38.081408 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" Jan 27 13:32:50 crc kubenswrapper[4745]: I0127 13:32:50.074314 4745 
scope.go:117] "RemoveContainer" containerID="988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb"
Jan 27 13:32:50 crc kubenswrapper[4745]: E0127 13:32:50.075226 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625"
Jan 27 13:33:05 crc kubenswrapper[4745]: I0127 13:33:05.074062 4745 scope.go:117] "RemoveContainer" containerID="988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb"
Jan 27 13:33:05 crc kubenswrapper[4745]: E0127 13:33:05.075048 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625"
Jan 27 13:33:05 crc kubenswrapper[4745]: I0127 13:33:05.799087 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4q5vw"]
Jan 27 13:33:05 crc kubenswrapper[4745]: E0127 13:33:05.801473 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1" containerName="collect-profiles"
Jan 27 13:33:05 crc kubenswrapper[4745]: I0127 13:33:05.801527 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1" containerName="collect-profiles"
Jan 27 13:33:05 crc kubenswrapper[4745]: I0127 13:33:05.801772 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e8c6575-7c5e-4ff0-8a5a-cd2d8d4bc8b1" containerName="collect-profiles"
Jan 27 13:33:05 crc kubenswrapper[4745]: I0127 13:33:05.803102 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4q5vw"
Jan 27 13:33:05 crc kubenswrapper[4745]: I0127 13:33:05.806110 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4q5vw"]
Jan 27 13:33:05 crc kubenswrapper[4745]: I0127 13:33:05.816746 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3b1242d-ec83-4442-a05a-d51a3849cdc2-catalog-content\") pod \"certified-operators-4q5vw\" (UID: \"f3b1242d-ec83-4442-a05a-d51a3849cdc2\") " pod="openshift-marketplace/certified-operators-4q5vw"
Jan 27 13:33:05 crc kubenswrapper[4745]: I0127 13:33:05.816901 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrfjv\" (UniqueName: \"kubernetes.io/projected/f3b1242d-ec83-4442-a05a-d51a3849cdc2-kube-api-access-zrfjv\") pod \"certified-operators-4q5vw\" (UID: \"f3b1242d-ec83-4442-a05a-d51a3849cdc2\") " pod="openshift-marketplace/certified-operators-4q5vw"
Jan 27 13:33:05 crc kubenswrapper[4745]: I0127 13:33:05.816968 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3b1242d-ec83-4442-a05a-d51a3849cdc2-utilities\") pod \"certified-operators-4q5vw\" (UID: \"f3b1242d-ec83-4442-a05a-d51a3849cdc2\") " pod="openshift-marketplace/certified-operators-4q5vw"
Jan 27 13:33:05 crc kubenswrapper[4745]: I0127 13:33:05.918256 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3b1242d-ec83-4442-a05a-d51a3849cdc2-catalog-content\") pod \"certified-operators-4q5vw\" (UID: \"f3b1242d-ec83-4442-a05a-d51a3849cdc2\") " pod="openshift-marketplace/certified-operators-4q5vw"
Jan 27 13:33:05 crc kubenswrapper[4745]: I0127 13:33:05.918353 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrfjv\" (UniqueName: \"kubernetes.io/projected/f3b1242d-ec83-4442-a05a-d51a3849cdc2-kube-api-access-zrfjv\") pod \"certified-operators-4q5vw\" (UID: \"f3b1242d-ec83-4442-a05a-d51a3849cdc2\") " pod="openshift-marketplace/certified-operators-4q5vw"
Jan 27 13:33:05 crc kubenswrapper[4745]: I0127 13:33:05.918392 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3b1242d-ec83-4442-a05a-d51a3849cdc2-utilities\") pod \"certified-operators-4q5vw\" (UID: \"f3b1242d-ec83-4442-a05a-d51a3849cdc2\") " pod="openshift-marketplace/certified-operators-4q5vw"
Jan 27 13:33:05 crc kubenswrapper[4745]: I0127 13:33:05.918938 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3b1242d-ec83-4442-a05a-d51a3849cdc2-utilities\") pod \"certified-operators-4q5vw\" (UID: \"f3b1242d-ec83-4442-a05a-d51a3849cdc2\") " pod="openshift-marketplace/certified-operators-4q5vw"
Jan 27 13:33:05 crc kubenswrapper[4745]: I0127 13:33:05.919228 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3b1242d-ec83-4442-a05a-d51a3849cdc2-catalog-content\") pod \"certified-operators-4q5vw\" (UID: \"f3b1242d-ec83-4442-a05a-d51a3849cdc2\") " pod="openshift-marketplace/certified-operators-4q5vw"
Jan 27 13:33:05 crc kubenswrapper[4745]: I0127 13:33:05.939189 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrfjv\" (UniqueName: \"kubernetes.io/projected/f3b1242d-ec83-4442-a05a-d51a3849cdc2-kube-api-access-zrfjv\") pod \"certified-operators-4q5vw\" (UID: \"f3b1242d-ec83-4442-a05a-d51a3849cdc2\") " pod="openshift-marketplace/certified-operators-4q5vw"
Jan 27 13:33:06 crc kubenswrapper[4745]: I0127 13:33:06.132000 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4q5vw"
Jan 27 13:33:06 crc kubenswrapper[4745]: I0127 13:33:06.716450 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4q5vw"]
Jan 27 13:33:07 crc kubenswrapper[4745]: I0127 13:33:07.659378 4745 generic.go:334] "Generic (PLEG): container finished" podID="f3b1242d-ec83-4442-a05a-d51a3849cdc2" containerID="64d5514dd32c04bf9469a271a681596214823a0e6fa93bfe94866707a656fbf3" exitCode=0
Jan 27 13:33:07 crc kubenswrapper[4745]: I0127 13:33:07.659456 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4q5vw" event={"ID":"f3b1242d-ec83-4442-a05a-d51a3849cdc2","Type":"ContainerDied","Data":"64d5514dd32c04bf9469a271a681596214823a0e6fa93bfe94866707a656fbf3"}
Jan 27 13:33:07 crc kubenswrapper[4745]: I0127 13:33:07.659755 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4q5vw" event={"ID":"f3b1242d-ec83-4442-a05a-d51a3849cdc2","Type":"ContainerStarted","Data":"8c5528b245aa421ab7165553c8a8254c432c112768dbc429e2fa32f677320c1d"}
Jan 27 13:33:07 crc kubenswrapper[4745]: I0127 13:33:07.663130 4745 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 13:33:12 crc kubenswrapper[4745]: I0127 13:33:12.715176 4745 generic.go:334] "Generic (PLEG): container finished" podID="f3b1242d-ec83-4442-a05a-d51a3849cdc2" containerID="4d2605bcafe55c0ea34d8c1aa540d090bae2cd2fc0985236d5207a6b63fb5384" exitCode=0
Jan 27 13:33:12 crc kubenswrapper[4745]: I0127 13:33:12.715988 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4q5vw" event={"ID":"f3b1242d-ec83-4442-a05a-d51a3849cdc2","Type":"ContainerDied","Data":"4d2605bcafe55c0ea34d8c1aa540d090bae2cd2fc0985236d5207a6b63fb5384"}
Jan 27 13:33:13 crc kubenswrapper[4745]: I0127 13:33:13.725141 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4q5vw" event={"ID":"f3b1242d-ec83-4442-a05a-d51a3849cdc2","Type":"ContainerStarted","Data":"af2b65a25c13b6f51cf576d561de0eacaca3deff086a2d8c308cd78f9c8c3874"}
Jan 27 13:33:13 crc kubenswrapper[4745]: I0127 13:33:13.748896 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4q5vw" podStartSLOduration=3.266098445 podStartE2EDuration="8.748877659s" podCreationTimestamp="2026-01-27 13:33:05 +0000 UTC" firstStartedPulling="2026-01-27 13:33:07.662295094 +0000 UTC m=+4880.467205782" lastFinishedPulling="2026-01-27 13:33:13.145074288 +0000 UTC m=+4885.949984996" observedRunningTime="2026-01-27 13:33:13.743717413 +0000 UTC m=+4886.548628141" watchObservedRunningTime="2026-01-27 13:33:13.748877659 +0000 UTC m=+4886.553788347"
Jan 27 13:33:16 crc kubenswrapper[4745]: I0127 13:33:16.132740 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4q5vw"
Jan 27 13:33:16 crc kubenswrapper[4745]: I0127 13:33:16.133178 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4q5vw"
Jan 27 13:33:16 crc kubenswrapper[4745]: I0127 13:33:16.180775 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4q5vw"
Jan 27 13:33:17 crc kubenswrapper[4745]: I0127 13:33:17.074034 4745 scope.go:117] "RemoveContainer" containerID="988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb"
Jan 27 13:33:17 crc kubenswrapper[4745]: E0127 13:33:17.074558 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625"
Jan 27 13:33:18 crc kubenswrapper[4745]: I0127 13:33:18.637321 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mllvm"]
Jan 27 13:33:18 crc kubenswrapper[4745]: I0127 13:33:18.639328 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mllvm"
Jan 27 13:33:18 crc kubenswrapper[4745]: I0127 13:33:18.643908 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mllvm"]
Jan 27 13:33:18 crc kubenswrapper[4745]: I0127 13:33:18.826464 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gdrkb"]
Jan 27 13:33:18 crc kubenswrapper[4745]: I0127 13:33:18.827955 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gdrkb"
Jan 27 13:33:18 crc kubenswrapper[4745]: I0127 13:33:18.828542 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqdfn\" (UniqueName: \"kubernetes.io/projected/169ac50a-fca6-42ee-b578-3f93629fcd3c-kube-api-access-lqdfn\") pod \"redhat-marketplace-mllvm\" (UID: \"169ac50a-fca6-42ee-b578-3f93629fcd3c\") " pod="openshift-marketplace/redhat-marketplace-mllvm"
Jan 27 13:33:18 crc kubenswrapper[4745]: I0127 13:33:18.828662 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/169ac50a-fca6-42ee-b578-3f93629fcd3c-utilities\") pod \"redhat-marketplace-mllvm\" (UID: \"169ac50a-fca6-42ee-b578-3f93629fcd3c\") " pod="openshift-marketplace/redhat-marketplace-mllvm"
Jan 27 13:33:18 crc kubenswrapper[4745]: I0127 13:33:18.828690 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/169ac50a-fca6-42ee-b578-3f93629fcd3c-catalog-content\") pod \"redhat-marketplace-mllvm\" (UID: \"169ac50a-fca6-42ee-b578-3f93629fcd3c\") " pod="openshift-marketplace/redhat-marketplace-mllvm"
Jan 27 13:33:18 crc kubenswrapper[4745]: I0127 13:33:18.842244 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gdrkb"]
Jan 27 13:33:18 crc kubenswrapper[4745]: I0127 13:33:18.929782 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/169ac50a-fca6-42ee-b578-3f93629fcd3c-utilities\") pod \"redhat-marketplace-mllvm\" (UID: \"169ac50a-fca6-42ee-b578-3f93629fcd3c\") " pod="openshift-marketplace/redhat-marketplace-mllvm"
Jan 27 13:33:18 crc kubenswrapper[4745]: I0127 13:33:18.929859 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/169ac50a-fca6-42ee-b578-3f93629fcd3c-catalog-content\") pod \"redhat-marketplace-mllvm\" (UID: \"169ac50a-fca6-42ee-b578-3f93629fcd3c\") " pod="openshift-marketplace/redhat-marketplace-mllvm"
Jan 27 13:33:18 crc kubenswrapper[4745]: I0127 13:33:18.929890 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x76cb\" (UniqueName: \"kubernetes.io/projected/ee79f6d7-5c70-4724-b3df-551ab392aa8d-kube-api-access-x76cb\") pod \"community-operators-gdrkb\" (UID: \"ee79f6d7-5c70-4724-b3df-551ab392aa8d\") " pod="openshift-marketplace/community-operators-gdrkb"
Jan 27 13:33:18 crc kubenswrapper[4745]: I0127 13:33:18.929942 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqdfn\" (UniqueName: \"kubernetes.io/projected/169ac50a-fca6-42ee-b578-3f93629fcd3c-kube-api-access-lqdfn\") pod \"redhat-marketplace-mllvm\" (UID: \"169ac50a-fca6-42ee-b578-3f93629fcd3c\") " pod="openshift-marketplace/redhat-marketplace-mllvm"
Jan 27 13:33:18 crc kubenswrapper[4745]: I0127 13:33:18.929993 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee79f6d7-5c70-4724-b3df-551ab392aa8d-utilities\") pod \"community-operators-gdrkb\" (UID: \"ee79f6d7-5c70-4724-b3df-551ab392aa8d\") " pod="openshift-marketplace/community-operators-gdrkb"
Jan 27 13:33:18 crc kubenswrapper[4745]: I0127 13:33:18.930018 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee79f6d7-5c70-4724-b3df-551ab392aa8d-catalog-content\") pod \"community-operators-gdrkb\" (UID: \"ee79f6d7-5c70-4724-b3df-551ab392aa8d\") " pod="openshift-marketplace/community-operators-gdrkb"
Jan 27 13:33:18 crc kubenswrapper[4745]: I0127 13:33:18.930656 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/169ac50a-fca6-42ee-b578-3f93629fcd3c-catalog-content\") pod \"redhat-marketplace-mllvm\" (UID: \"169ac50a-fca6-42ee-b578-3f93629fcd3c\") " pod="openshift-marketplace/redhat-marketplace-mllvm"
Jan 27 13:33:18 crc kubenswrapper[4745]: I0127 13:33:18.930656 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/169ac50a-fca6-42ee-b578-3f93629fcd3c-utilities\") pod \"redhat-marketplace-mllvm\" (UID: \"169ac50a-fca6-42ee-b578-3f93629fcd3c\") " pod="openshift-marketplace/redhat-marketplace-mllvm"
Jan 27 13:33:18 crc kubenswrapper[4745]: I0127 13:33:18.953874 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqdfn\" (UniqueName: \"kubernetes.io/projected/169ac50a-fca6-42ee-b578-3f93629fcd3c-kube-api-access-lqdfn\") pod \"redhat-marketplace-mllvm\" (UID: \"169ac50a-fca6-42ee-b578-3f93629fcd3c\") " pod="openshift-marketplace/redhat-marketplace-mllvm"
Jan 27 13:33:18 crc kubenswrapper[4745]: I0127 13:33:18.971127 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mllvm"
Jan 27 13:33:19 crc kubenswrapper[4745]: I0127 13:33:19.031062 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee79f6d7-5c70-4724-b3df-551ab392aa8d-utilities\") pod \"community-operators-gdrkb\" (UID: \"ee79f6d7-5c70-4724-b3df-551ab392aa8d\") " pod="openshift-marketplace/community-operators-gdrkb"
Jan 27 13:33:19 crc kubenswrapper[4745]: I0127 13:33:19.031117 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee79f6d7-5c70-4724-b3df-551ab392aa8d-catalog-content\") pod \"community-operators-gdrkb\" (UID: \"ee79f6d7-5c70-4724-b3df-551ab392aa8d\") " pod="openshift-marketplace/community-operators-gdrkb"
Jan 27 13:33:19 crc kubenswrapper[4745]: I0127 13:33:19.031227 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x76cb\" (UniqueName: \"kubernetes.io/projected/ee79f6d7-5c70-4724-b3df-551ab392aa8d-kube-api-access-x76cb\") pod \"community-operators-gdrkb\" (UID: \"ee79f6d7-5c70-4724-b3df-551ab392aa8d\") " pod="openshift-marketplace/community-operators-gdrkb"
Jan 27 13:33:19 crc kubenswrapper[4745]: I0127 13:33:19.032073 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee79f6d7-5c70-4724-b3df-551ab392aa8d-utilities\") pod \"community-operators-gdrkb\" (UID: \"ee79f6d7-5c70-4724-b3df-551ab392aa8d\") " pod="openshift-marketplace/community-operators-gdrkb"
Jan 27 13:33:19 crc kubenswrapper[4745]: I0127 13:33:19.032303 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee79f6d7-5c70-4724-b3df-551ab392aa8d-catalog-content\") pod \"community-operators-gdrkb\" (UID: \"ee79f6d7-5c70-4724-b3df-551ab392aa8d\") " pod="openshift-marketplace/community-operators-gdrkb"
Jan 27 13:33:19 crc kubenswrapper[4745]: I0127 13:33:19.055692 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x76cb\" (UniqueName: \"kubernetes.io/projected/ee79f6d7-5c70-4724-b3df-551ab392aa8d-kube-api-access-x76cb\") pod \"community-operators-gdrkb\" (UID: \"ee79f6d7-5c70-4724-b3df-551ab392aa8d\") " pod="openshift-marketplace/community-operators-gdrkb"
Jan 27 13:33:19 crc kubenswrapper[4745]: I0127 13:33:19.153555 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gdrkb"
Jan 27 13:33:19 crc kubenswrapper[4745]: I0127 13:33:19.503618 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mllvm"]
Jan 27 13:33:19 crc kubenswrapper[4745]: I0127 13:33:19.667728 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gdrkb"]
Jan 27 13:33:19 crc kubenswrapper[4745]: W0127 13:33:19.669284 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee79f6d7_5c70_4724_b3df_551ab392aa8d.slice/crio-962a5326154decf4dbf1fc431f4f74bb5ea1c389af3cb0de6953fc8ff4078c7c WatchSource:0}: Error finding container 962a5326154decf4dbf1fc431f4f74bb5ea1c389af3cb0de6953fc8ff4078c7c: Status 404 returned error can't find the container with id 962a5326154decf4dbf1fc431f4f74bb5ea1c389af3cb0de6953fc8ff4078c7c
Jan 27 13:33:19 crc kubenswrapper[4745]: I0127 13:33:19.785633 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdrkb" event={"ID":"ee79f6d7-5c70-4724-b3df-551ab392aa8d","Type":"ContainerStarted","Data":"962a5326154decf4dbf1fc431f4f74bb5ea1c389af3cb0de6953fc8ff4078c7c"}
Jan 27 13:33:19 crc kubenswrapper[4745]: I0127 13:33:19.788345 4745 generic.go:334] "Generic (PLEG): container finished" podID="169ac50a-fca6-42ee-b578-3f93629fcd3c" containerID="c49c3d845526d6c6b46f58bff24e6c846e83cd68be104c314ce2e1f7873455d4" exitCode=0
Jan 27 13:33:19 crc kubenswrapper[4745]: I0127 13:33:19.788393 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mllvm" event={"ID":"169ac50a-fca6-42ee-b578-3f93629fcd3c","Type":"ContainerDied","Data":"c49c3d845526d6c6b46f58bff24e6c846e83cd68be104c314ce2e1f7873455d4"}
Jan 27 13:33:19 crc kubenswrapper[4745]: I0127 13:33:19.788422 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mllvm" event={"ID":"169ac50a-fca6-42ee-b578-3f93629fcd3c","Type":"ContainerStarted","Data":"74ae425b86e27dfe9ee86825d682c2dbd2137ff1a2133b9cf48601594b7da361"}
Jan 27 13:33:20 crc kubenswrapper[4745]: I0127 13:33:20.798110 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mllvm" event={"ID":"169ac50a-fca6-42ee-b578-3f93629fcd3c","Type":"ContainerStarted","Data":"2cf2bfeebc90e7afc16551c9cc6aa8f9718a24afc2db639bd4bc3866db143ee2"}
Jan 27 13:33:20 crc kubenswrapper[4745]: I0127 13:33:20.800557 4745 generic.go:334] "Generic (PLEG): container finished" podID="ee79f6d7-5c70-4724-b3df-551ab392aa8d" containerID="bc22b56483b24ee888d372f4020db8499dd13601a1e5291e005793b4d631df46" exitCode=0
Jan 27 13:33:20 crc kubenswrapper[4745]: I0127 13:33:20.800631 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdrkb" event={"ID":"ee79f6d7-5c70-4724-b3df-551ab392aa8d","Type":"ContainerDied","Data":"bc22b56483b24ee888d372f4020db8499dd13601a1e5291e005793b4d631df46"}
Jan 27 13:33:21 crc kubenswrapper[4745]: I0127 13:33:21.815882 4745 generic.go:334] "Generic (PLEG): container finished" podID="169ac50a-fca6-42ee-b578-3f93629fcd3c" containerID="2cf2bfeebc90e7afc16551c9cc6aa8f9718a24afc2db639bd4bc3866db143ee2" exitCode=0
Jan 27 13:33:21 crc kubenswrapper[4745]: I0127 13:33:21.816019 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mllvm" event={"ID":"169ac50a-fca6-42ee-b578-3f93629fcd3c","Type":"ContainerDied","Data":"2cf2bfeebc90e7afc16551c9cc6aa8f9718a24afc2db639bd4bc3866db143ee2"}
Jan 27 13:33:21 crc kubenswrapper[4745]: I0127 13:33:21.825690 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdrkb" event={"ID":"ee79f6d7-5c70-4724-b3df-551ab392aa8d","Type":"ContainerStarted","Data":"31e68d3374f8fa846f58622e132aab6bcb9ae48b46581f35a6bc1f88e1df0df7"}
Jan 27 13:33:21 crc kubenswrapper[4745]: I0127 13:33:21.831430 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q45jc"]
Jan 27 13:33:21 crc kubenswrapper[4745]: I0127 13:33:21.833692 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q45jc"
Jan 27 13:33:21 crc kubenswrapper[4745]: I0127 13:33:21.842070 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q45jc"]
Jan 27 13:33:21 crc kubenswrapper[4745]: I0127 13:33:21.977496 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e07dc175-1fe7-41f8-b56a-a3f0d29c2236-utilities\") pod \"redhat-operators-q45jc\" (UID: \"e07dc175-1fe7-41f8-b56a-a3f0d29c2236\") " pod="openshift-marketplace/redhat-operators-q45jc"
Jan 27 13:33:21 crc kubenswrapper[4745]: I0127 13:33:21.977683 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e07dc175-1fe7-41f8-b56a-a3f0d29c2236-catalog-content\") pod \"redhat-operators-q45jc\" (UID: \"e07dc175-1fe7-41f8-b56a-a3f0d29c2236\") " pod="openshift-marketplace/redhat-operators-q45jc"
Jan 27 13:33:21 crc kubenswrapper[4745]: I0127 13:33:21.977743 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvz7t\" (UniqueName: \"kubernetes.io/projected/e07dc175-1fe7-41f8-b56a-a3f0d29c2236-kube-api-access-nvz7t\") pod \"redhat-operators-q45jc\" (UID: \"e07dc175-1fe7-41f8-b56a-a3f0d29c2236\") " pod="openshift-marketplace/redhat-operators-q45jc"
Jan 27 13:33:22 crc kubenswrapper[4745]: I0127 13:33:22.078573 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e07dc175-1fe7-41f8-b56a-a3f0d29c2236-catalog-content\") pod \"redhat-operators-q45jc\" (UID: \"e07dc175-1fe7-41f8-b56a-a3f0d29c2236\") " pod="openshift-marketplace/redhat-operators-q45jc"
Jan 27 13:33:22 crc kubenswrapper[4745]: I0127 13:33:22.078921 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvz7t\" (UniqueName: \"kubernetes.io/projected/e07dc175-1fe7-41f8-b56a-a3f0d29c2236-kube-api-access-nvz7t\") pod \"redhat-operators-q45jc\" (UID: \"e07dc175-1fe7-41f8-b56a-a3f0d29c2236\") " pod="openshift-marketplace/redhat-operators-q45jc"
Jan 27 13:33:22 crc kubenswrapper[4745]: I0127 13:33:22.079052 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e07dc175-1fe7-41f8-b56a-a3f0d29c2236-utilities\") pod \"redhat-operators-q45jc\" (UID: \"e07dc175-1fe7-41f8-b56a-a3f0d29c2236\") " pod="openshift-marketplace/redhat-operators-q45jc"
Jan 27 13:33:22 crc kubenswrapper[4745]: I0127 13:33:22.079298 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e07dc175-1fe7-41f8-b56a-a3f0d29c2236-catalog-content\") pod \"redhat-operators-q45jc\" (UID: \"e07dc175-1fe7-41f8-b56a-a3f0d29c2236\") " pod="openshift-marketplace/redhat-operators-q45jc"
Jan 27 13:33:22 crc kubenswrapper[4745]: I0127 13:33:22.079416 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e07dc175-1fe7-41f8-b56a-a3f0d29c2236-utilities\") pod \"redhat-operators-q45jc\" (UID: \"e07dc175-1fe7-41f8-b56a-a3f0d29c2236\") " pod="openshift-marketplace/redhat-operators-q45jc"
Jan 27 13:33:22 crc kubenswrapper[4745]: I0127 13:33:22.102090 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvz7t\" (UniqueName: \"kubernetes.io/projected/e07dc175-1fe7-41f8-b56a-a3f0d29c2236-kube-api-access-nvz7t\") pod \"redhat-operators-q45jc\" (UID: \"e07dc175-1fe7-41f8-b56a-a3f0d29c2236\") " pod="openshift-marketplace/redhat-operators-q45jc"
Jan 27 13:33:22 crc kubenswrapper[4745]: I0127 13:33:22.161123 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q45jc"
Jan 27 13:33:22 crc kubenswrapper[4745]: I0127 13:33:22.421913 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q45jc"]
Jan 27 13:33:22 crc kubenswrapper[4745]: I0127 13:33:22.835588 4745 generic.go:334] "Generic (PLEG): container finished" podID="ee79f6d7-5c70-4724-b3df-551ab392aa8d" containerID="31e68d3374f8fa846f58622e132aab6bcb9ae48b46581f35a6bc1f88e1df0df7" exitCode=0
Jan 27 13:33:22 crc kubenswrapper[4745]: I0127 13:33:22.835646 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdrkb" event={"ID":"ee79f6d7-5c70-4724-b3df-551ab392aa8d","Type":"ContainerDied","Data":"31e68d3374f8fa846f58622e132aab6bcb9ae48b46581f35a6bc1f88e1df0df7"}
Jan 27 13:33:22 crc kubenswrapper[4745]: I0127 13:33:22.837347 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q45jc" event={"ID":"e07dc175-1fe7-41f8-b56a-a3f0d29c2236","Type":"ContainerStarted","Data":"9f3087aa706782d936122715f1787b1ab4d000abd47aa1d05dddcc71a715453c"}
Jan 27 13:33:23 crc kubenswrapper[4745]: I0127 13:33:23.847393 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdrkb" event={"ID":"ee79f6d7-5c70-4724-b3df-551ab392aa8d","Type":"ContainerStarted","Data":"d124e7a32339364c9843aa325de958aeae117228e635e71d137ecc9a89cd7153"}
Jan 27 13:33:23 crc kubenswrapper[4745]: I0127 13:33:23.848707 4745 generic.go:334] "Generic (PLEG): container finished" podID="e07dc175-1fe7-41f8-b56a-a3f0d29c2236" containerID="038b611b1552b86eec189e83bb25b585f464f9794c7cda47946f1fb0ef05e727" exitCode=0
Jan 27 13:33:23 crc kubenswrapper[4745]: I0127 13:33:23.848770 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q45jc" event={"ID":"e07dc175-1fe7-41f8-b56a-a3f0d29c2236","Type":"ContainerDied","Data":"038b611b1552b86eec189e83bb25b585f464f9794c7cda47946f1fb0ef05e727"}
Jan 27 13:33:23 crc kubenswrapper[4745]: I0127 13:33:23.852384 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mllvm" event={"ID":"169ac50a-fca6-42ee-b578-3f93629fcd3c","Type":"ContainerStarted","Data":"137bb6227ddd151ffda2e95ed083558d3993050193945922486f636afc94659d"}
Jan 27 13:33:23 crc kubenswrapper[4745]: I0127 13:33:23.876074 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gdrkb" podStartSLOduration=3.341688454 podStartE2EDuration="5.876054532s" podCreationTimestamp="2026-01-27 13:33:18 +0000 UTC" firstStartedPulling="2026-01-27 13:33:20.802324578 +0000 UTC m=+4893.607235266" lastFinishedPulling="2026-01-27 13:33:23.336690666 +0000 UTC m=+4896.141601344" observedRunningTime="2026-01-27 13:33:23.871796522 +0000 UTC m=+4896.676707230" watchObservedRunningTime="2026-01-27 13:33:23.876054532 +0000 UTC m=+4896.680965220"
Jan 27 13:33:23 crc kubenswrapper[4745]: I0127 13:33:23.918322 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mllvm" podStartSLOduration=2.791138932 podStartE2EDuration="5.918305529s" podCreationTimestamp="2026-01-27 13:33:18 +0000 UTC" firstStartedPulling="2026-01-27 13:33:19.790398018 +0000 UTC m=+4892.595308706" lastFinishedPulling="2026-01-27 13:33:22.917564615 +0000 UTC m=+4895.722475303" observedRunningTime="2026-01-27 13:33:23.912436183 +0000 UTC m=+4896.717346871" watchObservedRunningTime="2026-01-27 13:33:23.918305529 +0000 UTC m=+4896.723216217"
Jan 27 13:33:25 crc kubenswrapper[4745]: I0127 13:33:25.868347 4745 generic.go:334] "Generic (PLEG): container finished" podID="e07dc175-1fe7-41f8-b56a-a3f0d29c2236" containerID="5951b79faf8c917aa3a73aa0a40327453bdd7eef325af6170b616c382648471c" exitCode=0
Jan 27 13:33:25 crc kubenswrapper[4745]: I0127 13:33:25.868490 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q45jc" event={"ID":"e07dc175-1fe7-41f8-b56a-a3f0d29c2236","Type":"ContainerDied","Data":"5951b79faf8c917aa3a73aa0a40327453bdd7eef325af6170b616c382648471c"}
Jan 27 13:33:26 crc kubenswrapper[4745]: I0127 13:33:26.175137 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4q5vw"
Jan 27 13:33:26 crc kubenswrapper[4745]: I0127 13:33:26.879681 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q45jc" event={"ID":"e07dc175-1fe7-41f8-b56a-a3f0d29c2236","Type":"ContainerStarted","Data":"a0bd53c7aa0365dd0bacf72d506d49433502d2eee8391b9c1e1f5f86867cd320"}
Jan 27 13:33:26 crc kubenswrapper[4745]: I0127 13:33:26.902535 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q45jc" podStartSLOduration=3.213418877 podStartE2EDuration="5.902513648s" podCreationTimestamp="2026-01-27 13:33:21 +0000 UTC" firstStartedPulling="2026-01-27 13:33:23.850071466 +0000 UTC m=+4896.654982144" lastFinishedPulling="2026-01-27 13:33:26.539166227 +0000 UTC m=+4899.344076915" observedRunningTime="2026-01-27 13:33:26.897694051 +0000 UTC m=+4899.702604749" watchObservedRunningTime="2026-01-27 13:33:26.902513648 +0000 UTC m=+4899.707424326"
Jan 27 13:33:28 crc kubenswrapper[4745]: I0127 13:33:28.971603 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mllvm"
Jan 27 13:33:28 crc kubenswrapper[4745]: I0127 13:33:28.972054 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mllvm"
Jan 27 13:33:29 crc kubenswrapper[4745]: I0127 13:33:29.012112 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mllvm"
Jan 27 13:33:29 crc kubenswrapper[4745]: I0127 13:33:29.155214 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gdrkb"
Jan 27 13:33:29 crc kubenswrapper[4745]: I0127 13:33:29.155282 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gdrkb"
Jan 27 13:33:29 crc kubenswrapper[4745]: I0127 13:33:29.193421 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gdrkb"
Jan 27 13:33:29 crc kubenswrapper[4745]: I0127 13:33:29.885442 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4q5vw"]
Jan 27 13:33:30 crc kubenswrapper[4745]: I0127 13:33:30.051131 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gdrkb"
Jan 27 13:33:30 crc kubenswrapper[4745]: I0127 13:33:30.051209 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mllvm"
Jan 27 13:33:30 crc kubenswrapper[4745]: I0127 13:33:30.074110 4745 scope.go:117] "RemoveContainer" containerID="988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb"
Jan 27 13:33:30 crc kubenswrapper[4745]: E0127 13:33:30.074344 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625"
Jan 27 13:33:30 crc kubenswrapper[4745]: I0127 13:33:30.238032 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nj9xw"]
Jan 27 13:33:30 crc kubenswrapper[4745]: I0127 13:33:30.238436 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nj9xw" podUID="6f1cace8-3efb-4577-864e-6eb4c1ff4b6e" containerName="registry-server" containerID="cri-o://96e853dec5cb3a7138524330f9eace54917fa746b6e21dc0774bc84cdf22cfec" gracePeriod=2
Jan 27 13:33:31 crc kubenswrapper[4745]: I0127 13:33:31.966653 4745 generic.go:334] "Generic (PLEG): container finished" podID="6f1cace8-3efb-4577-864e-6eb4c1ff4b6e" containerID="96e853dec5cb3a7138524330f9eace54917fa746b6e21dc0774bc84cdf22cfec" exitCode=0
Jan 27 13:33:31 crc kubenswrapper[4745]: I0127 13:33:31.966857 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nj9xw" event={"ID":"6f1cace8-3efb-4577-864e-6eb4c1ff4b6e","Type":"ContainerDied","Data":"96e853dec5cb3a7138524330f9eace54917fa746b6e21dc0774bc84cdf22cfec"}
Jan 27 13:33:32 crc kubenswrapper[4745]: I0127 13:33:32.162153 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q45jc"
Jan 27 13:33:32 crc kubenswrapper[4745]: I0127 13:33:32.165116 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q45jc"
Jan 27 13:33:32 crc kubenswrapper[4745]: I0127 13:33:32.301323 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nj9xw"
Jan 27 13:33:32 crc kubenswrapper[4745]: I0127 13:33:32.485254 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnp8r\" (UniqueName: \"kubernetes.io/projected/6f1cace8-3efb-4577-864e-6eb4c1ff4b6e-kube-api-access-nnp8r\") pod \"6f1cace8-3efb-4577-864e-6eb4c1ff4b6e\" (UID: \"6f1cace8-3efb-4577-864e-6eb4c1ff4b6e\") "
Jan 27 13:33:32 crc kubenswrapper[4745]: I0127 13:33:32.485552 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f1cace8-3efb-4577-864e-6eb4c1ff4b6e-utilities\") pod \"6f1cace8-3efb-4577-864e-6eb4c1ff4b6e\" (UID: \"6f1cace8-3efb-4577-864e-6eb4c1ff4b6e\") "
Jan 27 13:33:32 crc kubenswrapper[4745]: I0127 13:33:32.485781 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f1cace8-3efb-4577-864e-6eb4c1ff4b6e-catalog-content\") pod \"6f1cace8-3efb-4577-864e-6eb4c1ff4b6e\" (UID: \"6f1cace8-3efb-4577-864e-6eb4c1ff4b6e\") "
Jan 27 13:33:32 crc kubenswrapper[4745]: I0127 13:33:32.486876 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f1cace8-3efb-4577-864e-6eb4c1ff4b6e-utilities" (OuterVolumeSpecName: "utilities") pod "6f1cace8-3efb-4577-864e-6eb4c1ff4b6e" (UID: "6f1cace8-3efb-4577-864e-6eb4c1ff4b6e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:33:32 crc kubenswrapper[4745]: I0127 13:33:32.502108 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f1cace8-3efb-4577-864e-6eb4c1ff4b6e-kube-api-access-nnp8r" (OuterVolumeSpecName: "kube-api-access-nnp8r") pod "6f1cace8-3efb-4577-864e-6eb4c1ff4b6e" (UID: "6f1cace8-3efb-4577-864e-6eb4c1ff4b6e"). InnerVolumeSpecName "kube-api-access-nnp8r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:33:32 crc kubenswrapper[4745]: I0127 13:33:32.544306 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f1cace8-3efb-4577-864e-6eb4c1ff4b6e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f1cace8-3efb-4577-864e-6eb4c1ff4b6e" (UID: "6f1cace8-3efb-4577-864e-6eb4c1ff4b6e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:33:32 crc kubenswrapper[4745]: I0127 13:33:32.587435 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f1cace8-3efb-4577-864e-6eb4c1ff4b6e-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 13:33:32 crc kubenswrapper[4745]: I0127 13:33:32.587503 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f1cace8-3efb-4577-864e-6eb4c1ff4b6e-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 13:33:32 crc kubenswrapper[4745]: I0127 13:33:32.587522 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnp8r\" (UniqueName: \"kubernetes.io/projected/6f1cace8-3efb-4577-864e-6eb4c1ff4b6e-kube-api-access-nnp8r\") on node \"crc\" DevicePath \"\""
Jan 27 13:33:32 crc kubenswrapper[4745]: I0127 13:33:32.617284 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mllvm"]
Jan 27 13:33:32 crc kubenswrapper[4745]: I0127 13:33:32.617549 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mllvm" podUID="169ac50a-fca6-42ee-b578-3f93629fcd3c" containerName="registry-server" containerID="cri-o://137bb6227ddd151ffda2e95ed083558d3993050193945922486f636afc94659d" gracePeriod=2
Jan 27 13:33:32 crc kubenswrapper[4745]: I0127 13:33:32.819048 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gdrkb"]
Jan 27 13:33:32 crc kubenswrapper[4745]: I0127 13:33:32.819622 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gdrkb" podUID="ee79f6d7-5c70-4724-b3df-551ab392aa8d" containerName="registry-server" containerID="cri-o://d124e7a32339364c9843aa325de958aeae117228e635e71d137ecc9a89cd7153" gracePeriod=2
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.008098 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nj9xw" event={"ID":"6f1cace8-3efb-4577-864e-6eb4c1ff4b6e","Type":"ContainerDied","Data":"85661e8ee331aaba8140dd5b31ea02a40e6c1a4ab4489f6cc8359c1dea15cafa"}
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.008133 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nj9xw"
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.008437 4745 scope.go:117] "RemoveContainer" containerID="96e853dec5cb3a7138524330f9eace54917fa746b6e21dc0774bc84cdf22cfec"
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.010524 4745 generic.go:334] "Generic (PLEG): container finished" podID="ee79f6d7-5c70-4724-b3df-551ab392aa8d" containerID="d124e7a32339364c9843aa325de958aeae117228e635e71d137ecc9a89cd7153" exitCode=0
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.010596 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdrkb" event={"ID":"ee79f6d7-5c70-4724-b3df-551ab392aa8d","Type":"ContainerDied","Data":"d124e7a32339364c9843aa325de958aeae117228e635e71d137ecc9a89cd7153"}
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.018965 4745 generic.go:334] "Generic (PLEG): container finished" podID="169ac50a-fca6-42ee-b578-3f93629fcd3c" containerID="137bb6227ddd151ffda2e95ed083558d3993050193945922486f636afc94659d" exitCode=0
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.021134 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mllvm" event={"ID":"169ac50a-fca6-42ee-b578-3f93629fcd3c","Type":"ContainerDied","Data":"137bb6227ddd151ffda2e95ed083558d3993050193945922486f636afc94659d"}
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.052146 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nj9xw"]
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.059653 4745 scope.go:117] "RemoveContainer" containerID="a0b8a5fe0ab45554c7d201de9fc3ff29552da0300a2da4e5fc8ed8f8312cd824"
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.069969 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nj9xw"]
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.086723 4745 scope.go:117] "RemoveContainer" containerID="296040eeca25458837fd5d302d79dc58cee25991c9a4cc606da4a4e997a19fd6"
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.107072 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mllvm"
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.217626 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q45jc" podUID="e07dc175-1fe7-41f8-b56a-a3f0d29c2236" containerName="registry-server" probeResult="failure" output=<
Jan 27 13:33:33 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s
Jan 27 13:33:33 crc kubenswrapper[4745]: >
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.298599 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/169ac50a-fca6-42ee-b578-3f93629fcd3c-utilities\") pod \"169ac50a-fca6-42ee-b578-3f93629fcd3c\" (UID: \"169ac50a-fca6-42ee-b578-3f93629fcd3c\") "
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.298723 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/169ac50a-fca6-42ee-b578-3f93629fcd3c-catalog-content\") pod \"169ac50a-fca6-42ee-b578-3f93629fcd3c\" (UID: \"169ac50a-fca6-42ee-b578-3f93629fcd3c\") "
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.298789 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqdfn\" (UniqueName: \"kubernetes.io/projected/169ac50a-fca6-42ee-b578-3f93629fcd3c-kube-api-access-lqdfn\") pod \"169ac50a-fca6-42ee-b578-3f93629fcd3c\" (UID: \"169ac50a-fca6-42ee-b578-3f93629fcd3c\") "
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.299320 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/169ac50a-fca6-42ee-b578-3f93629fcd3c-utilities" (OuterVolumeSpecName: "utilities") pod "169ac50a-fca6-42ee-b578-3f93629fcd3c" (UID: "169ac50a-fca6-42ee-b578-3f93629fcd3c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.304070 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/169ac50a-fca6-42ee-b578-3f93629fcd3c-kube-api-access-lqdfn" (OuterVolumeSpecName: "kube-api-access-lqdfn") pod "169ac50a-fca6-42ee-b578-3f93629fcd3c" (UID: "169ac50a-fca6-42ee-b578-3f93629fcd3c"). InnerVolumeSpecName "kube-api-access-lqdfn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.328537 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/169ac50a-fca6-42ee-b578-3f93629fcd3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "169ac50a-fca6-42ee-b578-3f93629fcd3c" (UID: "169ac50a-fca6-42ee-b578-3f93629fcd3c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.343247 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gdrkb"
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.400745 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqdfn\" (UniqueName: \"kubernetes.io/projected/169ac50a-fca6-42ee-b578-3f93629fcd3c-kube-api-access-lqdfn\") on node \"crc\" DevicePath \"\""
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.400793 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/169ac50a-fca6-42ee-b578-3f93629fcd3c-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.400821 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/169ac50a-fca6-42ee-b578-3f93629fcd3c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.502115 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee79f6d7-5c70-4724-b3df-551ab392aa8d-catalog-content\") pod \"ee79f6d7-5c70-4724-b3df-551ab392aa8d\" (UID: \"ee79f6d7-5c70-4724-b3df-551ab392aa8d\") "
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.502242 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee79f6d7-5c70-4724-b3df-551ab392aa8d-utilities\") pod \"ee79f6d7-5c70-4724-b3df-551ab392aa8d\" (UID: \"ee79f6d7-5c70-4724-b3df-551ab392aa8d\") "
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.502308 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x76cb\" (UniqueName: \"kubernetes.io/projected/ee79f6d7-5c70-4724-b3df-551ab392aa8d-kube-api-access-x76cb\") pod \"ee79f6d7-5c70-4724-b3df-551ab392aa8d\" (UID: \"ee79f6d7-5c70-4724-b3df-551ab392aa8d\") "
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.505383 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee79f6d7-5c70-4724-b3df-551ab392aa8d-utilities" (OuterVolumeSpecName: "utilities") pod "ee79f6d7-5c70-4724-b3df-551ab392aa8d" (UID: "ee79f6d7-5c70-4724-b3df-551ab392aa8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.515134 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee79f6d7-5c70-4724-b3df-551ab392aa8d-kube-api-access-x76cb" (OuterVolumeSpecName: "kube-api-access-x76cb") pod "ee79f6d7-5c70-4724-b3df-551ab392aa8d" (UID: "ee79f6d7-5c70-4724-b3df-551ab392aa8d"). InnerVolumeSpecName "kube-api-access-x76cb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.583036 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee79f6d7-5c70-4724-b3df-551ab392aa8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee79f6d7-5c70-4724-b3df-551ab392aa8d" (UID: "ee79f6d7-5c70-4724-b3df-551ab392aa8d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.603527 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee79f6d7-5c70-4724-b3df-551ab392aa8d-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.603566 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x76cb\" (UniqueName: \"kubernetes.io/projected/ee79f6d7-5c70-4724-b3df-551ab392aa8d-kube-api-access-x76cb\") on node \"crc\" DevicePath \"\""
Jan 27 13:33:33 crc kubenswrapper[4745]: I0127 13:33:33.603575 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee79f6d7-5c70-4724-b3df-551ab392aa8d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 13:33:34 crc kubenswrapper[4745]: I0127 13:33:34.027520 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdrkb" event={"ID":"ee79f6d7-5c70-4724-b3df-551ab392aa8d","Type":"ContainerDied","Data":"962a5326154decf4dbf1fc431f4f74bb5ea1c389af3cb0de6953fc8ff4078c7c"}
Jan 27 13:33:34 crc kubenswrapper[4745]: I0127 13:33:34.028366 4745 scope.go:117] "RemoveContainer" containerID="d124e7a32339364c9843aa325de958aeae117228e635e71d137ecc9a89cd7153"
Jan 27 13:33:34 crc kubenswrapper[4745]: I0127 13:33:34.028532 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gdrkb"
Jan 27 13:33:34 crc kubenswrapper[4745]: I0127 13:33:34.033608 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mllvm" event={"ID":"169ac50a-fca6-42ee-b578-3f93629fcd3c","Type":"ContainerDied","Data":"74ae425b86e27dfe9ee86825d682c2dbd2137ff1a2133b9cf48601594b7da361"}
Jan 27 13:33:34 crc kubenswrapper[4745]: I0127 13:33:34.033732 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mllvm"
Jan 27 13:33:34 crc kubenswrapper[4745]: I0127 13:33:34.052955 4745 scope.go:117] "RemoveContainer" containerID="31e68d3374f8fa846f58622e132aab6bcb9ae48b46581f35a6bc1f88e1df0df7"
Jan 27 13:33:34 crc kubenswrapper[4745]: I0127 13:33:34.094103 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f1cace8-3efb-4577-864e-6eb4c1ff4b6e" path="/var/lib/kubelet/pods/6f1cace8-3efb-4577-864e-6eb4c1ff4b6e/volumes"
Jan 27 13:33:34 crc kubenswrapper[4745]: I0127 13:33:34.094556 4745 scope.go:117] "RemoveContainer" containerID="bc22b56483b24ee888d372f4020db8499dd13601a1e5291e005793b4d631df46"
Jan 27 13:33:34 crc kubenswrapper[4745]: I0127 13:33:34.094926 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mllvm"]
Jan 27 13:33:34 crc kubenswrapper[4745]: I0127 13:33:34.094958 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mllvm"]
Jan 27 13:33:34 crc kubenswrapper[4745]: I0127 13:33:34.116773 4745 scope.go:117] "RemoveContainer" containerID="137bb6227ddd151ffda2e95ed083558d3993050193945922486f636afc94659d"
Jan 27 13:33:34 crc kubenswrapper[4745]: I0127 13:33:34.117319 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gdrkb"]
Jan 27 13:33:34 crc kubenswrapper[4745]: I0127 13:33:34.123312 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gdrkb"]
Jan 27 13:33:34 crc kubenswrapper[4745]: I0127 13:33:34.133406 4745 scope.go:117] "RemoveContainer" containerID="2cf2bfeebc90e7afc16551c9cc6aa8f9718a24afc2db639bd4bc3866db143ee2"
Jan 27 13:33:34 crc kubenswrapper[4745]: I0127 13:33:34.152054 4745 scope.go:117] "RemoveContainer" containerID="c49c3d845526d6c6b46f58bff24e6c846e83cd68be104c314ce2e1f7873455d4"
Jan 27 13:33:36 crc kubenswrapper[4745]: I0127 13:33:36.083785 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="169ac50a-fca6-42ee-b578-3f93629fcd3c" path="/var/lib/kubelet/pods/169ac50a-fca6-42ee-b578-3f93629fcd3c/volumes"
Jan 27 13:33:36 crc kubenswrapper[4745]: I0127 13:33:36.085833 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee79f6d7-5c70-4724-b3df-551ab392aa8d" path="/var/lib/kubelet/pods/ee79f6d7-5c70-4724-b3df-551ab392aa8d/volumes"
Jan 27 13:33:42 crc kubenswrapper[4745]: I0127 13:33:42.217329 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q45jc"
Jan 27 13:33:42 crc kubenswrapper[4745]: I0127 13:33:42.269172 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q45jc"
Jan 27 13:33:42 crc kubenswrapper[4745]: I0127 13:33:42.463713 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q45jc"]
Jan 27 13:33:43 crc kubenswrapper[4745]: I0127 13:33:43.075141 4745 scope.go:117] "RemoveContainer" containerID="988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb"
Jan 27 13:33:44 crc kubenswrapper[4745]: I0127 13:33:44.116206 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q45jc" podUID="e07dc175-1fe7-41f8-b56a-a3f0d29c2236" containerName="registry-server" containerID="cri-o://a0bd53c7aa0365dd0bacf72d506d49433502d2eee8391b9c1e1f5f86867cd320" gracePeriod=2
Jan 27 13:33:44 crc kubenswrapper[4745]: I0127 13:33:44.117262 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerStarted","Data":"d473f6f6a651dec1f03b46be9efe139e691ffcd45dadbbe3da3f3a50a42c45bb"}
Jan 27 13:33:44 crc kubenswrapper[4745]: I0127 13:33:44.571834 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q45jc"
Jan 27 13:33:44 crc kubenswrapper[4745]: I0127 13:33:44.769040 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvz7t\" (UniqueName: \"kubernetes.io/projected/e07dc175-1fe7-41f8-b56a-a3f0d29c2236-kube-api-access-nvz7t\") pod \"e07dc175-1fe7-41f8-b56a-a3f0d29c2236\" (UID: \"e07dc175-1fe7-41f8-b56a-a3f0d29c2236\") "
Jan 27 13:33:44 crc kubenswrapper[4745]: I0127 13:33:44.769214 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e07dc175-1fe7-41f8-b56a-a3f0d29c2236-catalog-content\") pod \"e07dc175-1fe7-41f8-b56a-a3f0d29c2236\" (UID: \"e07dc175-1fe7-41f8-b56a-a3f0d29c2236\") "
Jan 27 13:33:44 crc kubenswrapper[4745]: I0127 13:33:44.769249 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e07dc175-1fe7-41f8-b56a-a3f0d29c2236-utilities\") pod \"e07dc175-1fe7-41f8-b56a-a3f0d29c2236\" (UID: \"e07dc175-1fe7-41f8-b56a-a3f0d29c2236\") "
Jan 27 13:33:44 crc kubenswrapper[4745]: I0127 13:33:44.770554 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e07dc175-1fe7-41f8-b56a-a3f0d29c2236-utilities" (OuterVolumeSpecName: "utilities") pod "e07dc175-1fe7-41f8-b56a-a3f0d29c2236" (UID: "e07dc175-1fe7-41f8-b56a-a3f0d29c2236"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:33:44 crc kubenswrapper[4745]: I0127 13:33:44.776912 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e07dc175-1fe7-41f8-b56a-a3f0d29c2236-kube-api-access-nvz7t" (OuterVolumeSpecName: "kube-api-access-nvz7t") pod "e07dc175-1fe7-41f8-b56a-a3f0d29c2236" (UID: "e07dc175-1fe7-41f8-b56a-a3f0d29c2236"). InnerVolumeSpecName "kube-api-access-nvz7t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:33:44 crc kubenswrapper[4745]: I0127 13:33:44.870905 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvz7t\" (UniqueName: \"kubernetes.io/projected/e07dc175-1fe7-41f8-b56a-a3f0d29c2236-kube-api-access-nvz7t\") on node \"crc\" DevicePath \"\""
Jan 27 13:33:44 crc kubenswrapper[4745]: I0127 13:33:44.870946 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e07dc175-1fe7-41f8-b56a-a3f0d29c2236-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 13:33:44 crc kubenswrapper[4745]: I0127 13:33:44.891883 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e07dc175-1fe7-41f8-b56a-a3f0d29c2236-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e07dc175-1fe7-41f8-b56a-a3f0d29c2236" (UID: "e07dc175-1fe7-41f8-b56a-a3f0d29c2236"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:33:44 crc kubenswrapper[4745]: I0127 13:33:44.972994 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e07dc175-1fe7-41f8-b56a-a3f0d29c2236-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 13:33:45 crc kubenswrapper[4745]: I0127 13:33:45.131794 4745 generic.go:334] "Generic (PLEG): container finished" podID="e07dc175-1fe7-41f8-b56a-a3f0d29c2236" containerID="a0bd53c7aa0365dd0bacf72d506d49433502d2eee8391b9c1e1f5f86867cd320" exitCode=0
Jan 27 13:33:45 crc kubenswrapper[4745]: I0127 13:33:45.131850 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q45jc" event={"ID":"e07dc175-1fe7-41f8-b56a-a3f0d29c2236","Type":"ContainerDied","Data":"a0bd53c7aa0365dd0bacf72d506d49433502d2eee8391b9c1e1f5f86867cd320"}
Jan 27 13:33:45 crc kubenswrapper[4745]: I0127 13:33:45.132766 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q45jc" event={"ID":"e07dc175-1fe7-41f8-b56a-a3f0d29c2236","Type":"ContainerDied","Data":"9f3087aa706782d936122715f1787b1ab4d000abd47aa1d05dddcc71a715453c"}
Jan 27 13:33:45 crc kubenswrapper[4745]: I0127 13:33:45.131870 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q45jc"
Jan 27 13:33:45 crc kubenswrapper[4745]: I0127 13:33:45.132804 4745 scope.go:117] "RemoveContainer" containerID="a0bd53c7aa0365dd0bacf72d506d49433502d2eee8391b9c1e1f5f86867cd320"
Jan 27 13:33:45 crc kubenswrapper[4745]: I0127 13:33:45.154953 4745 scope.go:117] "RemoveContainer" containerID="5951b79faf8c917aa3a73aa0a40327453bdd7eef325af6170b616c382648471c"
Jan 27 13:33:45 crc kubenswrapper[4745]: I0127 13:33:45.176560 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q45jc"]
Jan 27 13:33:45 crc kubenswrapper[4745]: I0127 13:33:45.180955 4745 scope.go:117] "RemoveContainer" containerID="038b611b1552b86eec189e83bb25b585f464f9794c7cda47946f1fb0ef05e727"
Jan 27 13:33:45 crc kubenswrapper[4745]: I0127 13:33:45.183972 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q45jc"]
Jan 27 13:33:45 crc kubenswrapper[4745]: I0127 13:33:45.222588 4745 scope.go:117] "RemoveContainer" containerID="a0bd53c7aa0365dd0bacf72d506d49433502d2eee8391b9c1e1f5f86867cd320"
Jan 27 13:33:45 crc kubenswrapper[4745]: E0127 13:33:45.223444 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0bd53c7aa0365dd0bacf72d506d49433502d2eee8391b9c1e1f5f86867cd320\": container with ID starting with a0bd53c7aa0365dd0bacf72d506d49433502d2eee8391b9c1e1f5f86867cd320 not found: ID does not exist" containerID="a0bd53c7aa0365dd0bacf72d506d49433502d2eee8391b9c1e1f5f86867cd320"
Jan 27 13:33:45 crc kubenswrapper[4745]: I0127 13:33:45.223493 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0bd53c7aa0365dd0bacf72d506d49433502d2eee8391b9c1e1f5f86867cd320"} err="failed to get container status \"a0bd53c7aa0365dd0bacf72d506d49433502d2eee8391b9c1e1f5f86867cd320\": rpc error: code = NotFound desc = could not find container \"a0bd53c7aa0365dd0bacf72d506d49433502d2eee8391b9c1e1f5f86867cd320\": container with ID starting with a0bd53c7aa0365dd0bacf72d506d49433502d2eee8391b9c1e1f5f86867cd320 not found: ID does not exist"
Jan 27 13:33:45 crc kubenswrapper[4745]: I0127 13:33:45.223524 4745 scope.go:117] "RemoveContainer" containerID="5951b79faf8c917aa3a73aa0a40327453bdd7eef325af6170b616c382648471c"
Jan 27 13:33:45 crc kubenswrapper[4745]: E0127 13:33:45.223977 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5951b79faf8c917aa3a73aa0a40327453bdd7eef325af6170b616c382648471c\": container with ID starting with 5951b79faf8c917aa3a73aa0a40327453bdd7eef325af6170b616c382648471c not found: ID does not exist" containerID="5951b79faf8c917aa3a73aa0a40327453bdd7eef325af6170b616c382648471c"
Jan 27 13:33:45 crc kubenswrapper[4745]: I0127 13:33:45.224003 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5951b79faf8c917aa3a73aa0a40327453bdd7eef325af6170b616c382648471c"} err="failed to get container status \"5951b79faf8c917aa3a73aa0a40327453bdd7eef325af6170b616c382648471c\": rpc error: code = NotFound desc = could not find container \"5951b79faf8c917aa3a73aa0a40327453bdd7eef325af6170b616c382648471c\": container with ID starting with 5951b79faf8c917aa3a73aa0a40327453bdd7eef325af6170b616c382648471c not found: ID does not exist"
Jan 27 13:33:45 crc kubenswrapper[4745]: I0127 13:33:45.224022 4745 scope.go:117] "RemoveContainer" containerID="038b611b1552b86eec189e83bb25b585f464f9794c7cda47946f1fb0ef05e727"
Jan 27 13:33:45 crc kubenswrapper[4745]: E0127 13:33:45.224374 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"038b611b1552b86eec189e83bb25b585f464f9794c7cda47946f1fb0ef05e727\": container with ID starting with 038b611b1552b86eec189e83bb25b585f464f9794c7cda47946f1fb0ef05e727 not found: ID does not exist" containerID="038b611b1552b86eec189e83bb25b585f464f9794c7cda47946f1fb0ef05e727"
Jan 27 13:33:45 crc kubenswrapper[4745]: I0127 13:33:45.224441 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"038b611b1552b86eec189e83bb25b585f464f9794c7cda47946f1fb0ef05e727"} err="failed to get container status \"038b611b1552b86eec189e83bb25b585f464f9794c7cda47946f1fb0ef05e727\": rpc error: code = NotFound desc = could not find container \"038b611b1552b86eec189e83bb25b585f464f9794c7cda47946f1fb0ef05e727\": container with ID starting with 038b611b1552b86eec189e83bb25b585f464f9794c7cda47946f1fb0ef05e727 not found: ID does not exist"
Jan 27 13:33:46 crc kubenswrapper[4745]: I0127 13:33:46.082654 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e07dc175-1fe7-41f8-b56a-a3f0d29c2236" path="/var/lib/kubelet/pods/e07dc175-1fe7-41f8-b56a-a3f0d29c2236/volumes"
Jan 27 13:33:57 crc kubenswrapper[4745]: I0127 13:33:57.679525 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9wq9m/must-gather-dwwbw"]
Jan 27 13:33:57 crc kubenswrapper[4745]: E0127 13:33:57.680196 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f1cace8-3efb-4577-864e-6eb4c1ff4b6e" containerName="registry-server"
Jan 27 13:33:57 crc kubenswrapper[4745]: I0127 13:33:57.680207 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f1cace8-3efb-4577-864e-6eb4c1ff4b6e" containerName="registry-server"
Jan 27 13:33:57 crc kubenswrapper[4745]: E0127 13:33:57.680219 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169ac50a-fca6-42ee-b578-3f93629fcd3c" containerName="extract-utilities"
Jan 27 13:33:57 crc kubenswrapper[4745]: I0127 13:33:57.680227 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="169ac50a-fca6-42ee-b578-3f93629fcd3c" containerName="extract-utilities"
Jan 27 13:33:57 crc kubenswrapper[4745]: E0127 13:33:57.680240 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e07dc175-1fe7-41f8-b56a-a3f0d29c2236" containerName="extract-utilities"
Jan 27 13:33:57 crc kubenswrapper[4745]: I0127 13:33:57.680246 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e07dc175-1fe7-41f8-b56a-a3f0d29c2236" containerName="extract-utilities"
Jan 27 13:33:57 crc kubenswrapper[4745]: E0127 13:33:57.680259 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f1cace8-3efb-4577-864e-6eb4c1ff4b6e" containerName="extract-utilities"
Jan 27 13:33:57 crc kubenswrapper[4745]: I0127 13:33:57.680265 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f1cace8-3efb-4577-864e-6eb4c1ff4b6e" containerName="extract-utilities"
Jan 27 13:33:57 crc kubenswrapper[4745]: E0127 13:33:57.680273 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e07dc175-1fe7-41f8-b56a-a3f0d29c2236" containerName="extract-content"
Jan 27 13:33:57 crc kubenswrapper[4745]: I0127 13:33:57.680278 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e07dc175-1fe7-41f8-b56a-a3f0d29c2236" containerName="extract-content"
Jan 27 13:33:57 crc kubenswrapper[4745]: E0127 13:33:57.680295 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169ac50a-fca6-42ee-b578-3f93629fcd3c" containerName="registry-server"
Jan 27 13:33:57 crc kubenswrapper[4745]: I0127 13:33:57.680301 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="169ac50a-fca6-42ee-b578-3f93629fcd3c" containerName="registry-server"
Jan 27 13:33:57 crc kubenswrapper[4745]: E0127 13:33:57.680311 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169ac50a-fca6-42ee-b578-3f93629fcd3c" containerName="extract-content"
Jan 27 13:33:57 crc kubenswrapper[4745]: I0127 13:33:57.680317 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="169ac50a-fca6-42ee-b578-3f93629fcd3c" containerName="extract-content"
Jan 27 13:33:57 crc kubenswrapper[4745]: E0127 13:33:57.680327 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f1cace8-3efb-4577-864e-6eb4c1ff4b6e" containerName="extract-content"
Jan 27 13:33:57 crc kubenswrapper[4745]: I0127 13:33:57.680333 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f1cace8-3efb-4577-864e-6eb4c1ff4b6e" containerName="extract-content"
Jan 27 13:33:57 crc kubenswrapper[4745]: E0127 13:33:57.680343 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee79f6d7-5c70-4724-b3df-551ab392aa8d" containerName="registry-server"
Jan 27 13:33:57 crc kubenswrapper[4745]: I0127 13:33:57.680349 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee79f6d7-5c70-4724-b3df-551ab392aa8d" containerName="registry-server"
Jan 27 13:33:57 crc kubenswrapper[4745]: E0127 13:33:57.680358 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee79f6d7-5c70-4724-b3df-551ab392aa8d" containerName="extract-utilities"
Jan 27 13:33:57 crc kubenswrapper[4745]: I0127 13:33:57.680364 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee79f6d7-5c70-4724-b3df-551ab392aa8d" containerName="extract-utilities"
Jan 27 13:33:57 crc kubenswrapper[4745]: E0127 13:33:57.680374 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee79f6d7-5c70-4724-b3df-551ab392aa8d" containerName="extract-content"
Jan 27 13:33:57 crc kubenswrapper[4745]: I0127 13:33:57.680380 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee79f6d7-5c70-4724-b3df-551ab392aa8d" containerName="extract-content"
Jan 27 13:33:57 crc kubenswrapper[4745]: E0127 13:33:57.680389 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e07dc175-1fe7-41f8-b56a-a3f0d29c2236" containerName="registry-server"
Jan 27 13:33:57 crc kubenswrapper[4745]: I0127 13:33:57.680395 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e07dc175-1fe7-41f8-b56a-a3f0d29c2236" containerName="registry-server"
Jan 27 13:33:57 crc kubenswrapper[4745]: I0127 13:33:57.680510 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="e07dc175-1fe7-41f8-b56a-a3f0d29c2236" containerName="registry-server"
Jan 27 13:33:57 crc kubenswrapper[4745]: I0127 13:33:57.680518 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee79f6d7-5c70-4724-b3df-551ab392aa8d" containerName="registry-server"
Jan 27 13:33:57 crc kubenswrapper[4745]: I0127 13:33:57.680528 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f1cace8-3efb-4577-864e-6eb4c1ff4b6e" containerName="registry-server"
Jan 27 13:33:57 crc kubenswrapper[4745]: I0127 13:33:57.680541 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="169ac50a-fca6-42ee-b578-3f93629fcd3c" containerName="registry-server"
Jan 27 13:33:57 crc kubenswrapper[4745]: I0127 13:33:57.681235 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9wq9m/must-gather-dwwbw"
Jan 27 13:33:57 crc kubenswrapper[4745]: I0127 13:33:57.684049 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-9wq9m"/"default-dockercfg-fmrb6"
Jan 27 13:33:57 crc kubenswrapper[4745]: I0127 13:33:57.684372 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9wq9m"/"kube-root-ca.crt"
Jan 27 13:33:57 crc kubenswrapper[4745]: I0127 13:33:57.695156 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9wq9m/must-gather-dwwbw"]
Jan 27 13:33:57 crc kubenswrapper[4745]: I0127 13:33:57.695826 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9wq9m"/"openshift-service-ca.crt"
Jan 27 13:33:57 crc kubenswrapper[4745]: I0127 13:33:57.817191 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/879e4c9b-017c-4f48-af00-78e378594425-must-gather-output\") pod \"must-gather-dwwbw\" (UID: \"879e4c9b-017c-4f48-af00-78e378594425\") " pod="openshift-must-gather-9wq9m/must-gather-dwwbw"
Jan 27 13:33:57 crc kubenswrapper[4745]: I0127 13:33:57.817340 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj6xb\" (UniqueName: \"kubernetes.io/projected/879e4c9b-017c-4f48-af00-78e378594425-kube-api-access-nj6xb\") pod \"must-gather-dwwbw\" (UID: \"879e4c9b-017c-4f48-af00-78e378594425\") " pod="openshift-must-gather-9wq9m/must-gather-dwwbw"
Jan 27 13:33:57 crc kubenswrapper[4745]: I0127 13:33:57.918614 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj6xb\" (UniqueName: \"kubernetes.io/projected/879e4c9b-017c-4f48-af00-78e378594425-kube-api-access-nj6xb\") pod \"must-gather-dwwbw\" (UID: \"879e4c9b-017c-4f48-af00-78e378594425\") " pod="openshift-must-gather-9wq9m/must-gather-dwwbw"
Jan 27 13:33:57 crc
kubenswrapper[4745]: I0127 13:33:57.918677 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/879e4c9b-017c-4f48-af00-78e378594425-must-gather-output\") pod \"must-gather-dwwbw\" (UID: \"879e4c9b-017c-4f48-af00-78e378594425\") " pod="openshift-must-gather-9wq9m/must-gather-dwwbw" Jan 27 13:33:57 crc kubenswrapper[4745]: I0127 13:33:57.919084 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/879e4c9b-017c-4f48-af00-78e378594425-must-gather-output\") pod \"must-gather-dwwbw\" (UID: \"879e4c9b-017c-4f48-af00-78e378594425\") " pod="openshift-must-gather-9wq9m/must-gather-dwwbw" Jan 27 13:33:57 crc kubenswrapper[4745]: I0127 13:33:57.937087 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj6xb\" (UniqueName: \"kubernetes.io/projected/879e4c9b-017c-4f48-af00-78e378594425-kube-api-access-nj6xb\") pod \"must-gather-dwwbw\" (UID: \"879e4c9b-017c-4f48-af00-78e378594425\") " pod="openshift-must-gather-9wq9m/must-gather-dwwbw" Jan 27 13:33:58 crc kubenswrapper[4745]: I0127 13:33:58.006052 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9wq9m/must-gather-dwwbw" Jan 27 13:33:58 crc kubenswrapper[4745]: I0127 13:33:58.431055 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9wq9m/must-gather-dwwbw"] Jan 27 13:33:58 crc kubenswrapper[4745]: W0127 13:33:58.434799 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod879e4c9b_017c_4f48_af00_78e378594425.slice/crio-55851374c7ec2f09f813f9c93dc8644b99fdb403c4acca8de20784628424f8bf WatchSource:0}: Error finding container 55851374c7ec2f09f813f9c93dc8644b99fdb403c4acca8de20784628424f8bf: Status 404 returned error can't find the container with id 55851374c7ec2f09f813f9c93dc8644b99fdb403c4acca8de20784628424f8bf Jan 27 13:33:59 crc kubenswrapper[4745]: I0127 13:33:59.244641 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9wq9m/must-gather-dwwbw" event={"ID":"879e4c9b-017c-4f48-af00-78e378594425","Type":"ContainerStarted","Data":"55851374c7ec2f09f813f9c93dc8644b99fdb403c4acca8de20784628424f8bf"} Jan 27 13:34:06 crc kubenswrapper[4745]: I0127 13:34:06.312756 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9wq9m/must-gather-dwwbw" event={"ID":"879e4c9b-017c-4f48-af00-78e378594425","Type":"ContainerStarted","Data":"eee8a30081703f7cb0529988abf10c5e1326878cc591846bc6752cc02f83361b"} Jan 27 13:34:06 crc kubenswrapper[4745]: I0127 13:34:06.313306 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9wq9m/must-gather-dwwbw" event={"ID":"879e4c9b-017c-4f48-af00-78e378594425","Type":"ContainerStarted","Data":"3ac6aac87ab82541e6442deab81fdcb665a94ecc670b30beeb708da88b05769d"} Jan 27 13:34:06 crc kubenswrapper[4745]: I0127 13:34:06.336289 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9wq9m/must-gather-dwwbw" podStartSLOduration=2.117002133 podStartE2EDuration="9.336266446s" podCreationTimestamp="2026-01-27 13:33:57 +0000 UTC" firstStartedPulling="2026-01-27 13:33:58.436711275 +0000 UTC m=+4931.241621963" lastFinishedPulling="2026-01-27 13:34:05.655975588 +0000 UTC m=+4938.460886276" observedRunningTime="2026-01-27 13:34:06.330721349 +0000 UTC 
m=+4939.135632057" watchObservedRunningTime="2026-01-27 13:34:06.336266446 +0000 UTC m=+4939.141177134" Jan 27 13:35:14 crc kubenswrapper[4745]: I0127 13:35:14.895733 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt_f6975204-2c25-460d-945c-61061b38a981/util/0.log" Jan 27 13:35:15 crc kubenswrapper[4745]: I0127 13:35:15.151682 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt_f6975204-2c25-460d-945c-61061b38a981/pull/0.log" Jan 27 13:35:15 crc kubenswrapper[4745]: I0127 13:35:15.157347 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt_f6975204-2c25-460d-945c-61061b38a981/util/0.log" Jan 27 13:35:15 crc kubenswrapper[4745]: I0127 13:35:15.212005 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt_f6975204-2c25-460d-945c-61061b38a981/pull/0.log" Jan 27 13:35:15 crc kubenswrapper[4745]: I0127 13:35:15.365610 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt_f6975204-2c25-460d-945c-61061b38a981/util/0.log" Jan 27 13:35:15 crc kubenswrapper[4745]: I0127 13:35:15.388596 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt_f6975204-2c25-460d-945c-61061b38a981/extract/0.log" Jan 27 13:35:15 crc kubenswrapper[4745]: I0127 13:35:15.441205 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5dd18b35a12deaf7a0598495fd5b77d2bd43512cb23bddbb28ccb4f62bgsdqt_f6975204-2c25-460d-945c-61061b38a981/pull/0.log" Jan 27 13:35:15 crc kubenswrapper[4745]: I0127 13:35:15.562431 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-65ff799cfd-ptkxh_a545817b-adaf-4966-8472-4a599db84913/manager/0.log" Jan 27 13:35:15 crc kubenswrapper[4745]: I0127 13:35:15.611319 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-655bf9cfbb-kdbm9_7fa2cf33-1cec-4874-8e41-090f3bd0f550/manager/0.log" Jan 27 13:35:15 crc kubenswrapper[4745]: I0127 13:35:15.756614 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-77554cdc5c-g429m_5dcdc404-8271-4f68-ab3e-b2158e959c6a/manager/0.log" Jan 27 13:35:15 crc kubenswrapper[4745]: I0127 13:35:15.855033 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-67dd55ff59-78zrk_1268d1f9-be48-4d61-8750-d941d0699718/manager/0.log" Jan 27 13:35:15 crc kubenswrapper[4745]: I0127 13:35:15.939921 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-575ffb885b-7jg6g_6e5cee05-93a0-415b-b0f8-12187035f0e0/manager/0.log" Jan 27 13:35:16 crc kubenswrapper[4745]: I0127 13:35:16.073845 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-kfdwp_f11a179f-d8d9-4a2b-bce5-5319a44efdb0/manager/0.log" Jan 27 13:35:16 crc kubenswrapper[4745]: I0127 13:35:16.165669 4745 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d75bc88d5-58k9b_ca2fa659-fb2b-446c-833d-78a0314a8059/manager/0.log" Jan 27 13:35:16 crc kubenswrapper[4745]: I0127 13:35:16.287196 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-768b776ffb-9w75r_8b457f32-c7cd-4113-b4a7-d4e06bc578d3/manager/0.log" Jan 27 13:35:16 crc kubenswrapper[4745]: I0127 13:35:16.446087 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55f684fd56-dmd65_cc8f3584-bf19-41e4-837a-13afabf31909/manager/0.log" Jan 27 13:35:16 crc kubenswrapper[4745]: I0127 13:35:16.506362 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-849fcfbb6b-hvr5f_837948f6-a7b7-4895-bc90-c87cce695f25/manager/0.log" Jan 27 13:35:16 crc kubenswrapper[4745]: I0127 13:35:16.656089 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-x6r29_c1aa3726-fa0d-487f-b9c4-813b0a72924c/manager/0.log" Jan 27 13:35:16 crc kubenswrapper[4745]: I0127 13:35:16.740311 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7ffd8d76d4-mr876_077960c7-14c3-4cc0-8760-681a5e59dd07/manager/0.log" Jan 27 13:35:16 crc kubenswrapper[4745]: I0127 13:35:16.898641 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-fbd766fb6-57d5j_ec491b6d-0c60-419b-950f-d91af37597a3/manager/0.log" Jan 27 13:35:16 crc kubenswrapper[4745]: I0127 13:35:16.973206 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7875d7675-tzbs9_a2316a86-a910-42cd-810f-390a7c26e2e9/manager/0.log" Jan 27 13:35:17 crc kubenswrapper[4745]: I0127 13:35:17.141716 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854kk4f2_4fe12909-b3c6-43a8-8c28-1e2e6dd7958f/manager/0.log" Jan 27 13:35:17 crc kubenswrapper[4745]: I0127 13:35:17.312029 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7bc74c4864-pgst6_9a62f602-d717-4a1f-996d-57fa02fbc829/operator/0.log" Jan 27 13:35:17 crc kubenswrapper[4745]: I0127 13:35:17.412764 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-96bd7847-d5vm4_e189eaea-4a00-43c3-b92e-36d10aa9b6d1/manager/0.log" Jan 27 13:35:17 crc kubenswrapper[4745]: I0127 13:35:17.526237 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-g74wc_f9c63e1a-3bc5-4367-8fba-b4c574ba5592/registry-server/0.log" Jan 27 13:35:17 crc kubenswrapper[4745]: I0127 13:35:17.631526 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-8h6mr_3339982a-d5be-4486-b767-127e2873d450/manager/0.log" Jan 27 13:35:17 crc kubenswrapper[4745]: I0127 13:35:17.956216 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-qwl6n_95ef1084-25bf-4a8c-b758-f3fd81957d2b/manager/0.log" Jan 27 13:35:18 crc kubenswrapper[4745]: I0127 13:35:18.104961 4745 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-xhm9d_689ac5a4-566b-41df-9c90-d6f7734a2d79/operator/0.log" Jan 27 13:35:18 crc kubenswrapper[4745]: I0127 13:35:18.197943 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-hg5pv_49e5ed64-890e-430d-a177-df3309fb625c/manager/0.log" Jan 27 13:35:18 crc kubenswrapper[4745]: I0127 13:35:18.307009 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-799bc87c89-zbnj5_2ad946ad-ed35-48d1-96c2-5d5dd65eb01c/manager/0.log" Jan 27 13:35:18 crc kubenswrapper[4745]: I0127 13:35:18.469914 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-kdgj4_3d4083db-d2df-46b7-8e81-c7dddecc8d21/manager/0.log" Jan 27 13:35:18 crc kubenswrapper[4745]: I0127 13:35:18.566946 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-d6b8bcbc9-fx8bq_b78df1ec-2307-490b-bf7a-4729381c9b9e/manager/0.log" Jan 27 13:35:37 crc kubenswrapper[4745]: I0127 13:35:37.829541 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-bgpwd_ac22d21e-ce2f-4e46-8b65-e6c84480b954/control-plane-machine-set-operator/0.log" Jan 27 13:35:38 crc kubenswrapper[4745]: I0127 13:35:38.023404 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4hbbw_f880472d-b13f-4b62-946f-3d74aafe5743/kube-rbac-proxy/0.log" Jan 27 13:35:38 crc kubenswrapper[4745]: I0127 13:35:38.109315 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4hbbw_f880472d-b13f-4b62-946f-3d74aafe5743/machine-api-operator/0.log" Jan 27 13:35:50 crc kubenswrapper[4745]: I0127 13:35:50.076458 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-286zr_29719123-511c-4ab3-80e0-956a42bbce47/cert-manager-controller/0.log" Jan 27 13:35:50 crc kubenswrapper[4745]: I0127 13:35:50.321537 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-g7rqb_991f26a3-5089-44a9-99e5-b3690b308b23/cert-manager-cainjector/0.log" Jan 27 13:35:50 crc kubenswrapper[4745]: I0127 13:35:50.379938 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-jr2pj_a95c82a2-1ac8-49c2-a42d-9c597f532783/cert-manager-webhook/0.log" Jan 27 13:36:03 crc kubenswrapper[4745]: I0127 13:36:03.726855 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-zc5sv_7b7bf861-ac3a-4232-9783-6b7662b6c69b/nmstate-console-plugin/0.log" Jan 27 13:36:03 crc kubenswrapper[4745]: I0127 13:36:03.906309 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-5bm8w_43ca3915-5425-4595-84b8-dd3c7fc696f3/nmstate-handler/0.log" Jan 27 13:36:03 crc kubenswrapper[4745]: I0127 13:36:03.948883 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-dk5k8_29a40d2d-f958-4b3a-ac04-0c817c5aa6ad/kube-rbac-proxy/0.log" Jan 27 13:36:03 crc kubenswrapper[4745]: I0127 13:36:03.997517 4745 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-dk5k8_29a40d2d-f958-4b3a-ac04-0c817c5aa6ad/nmstate-metrics/0.log" Jan 27 13:36:04 crc kubenswrapper[4745]: I0127 13:36:04.134133 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-gwj5j_c9455dbe-15f6-4d1b-ad15-2d5108ded02e/nmstate-operator/0.log" Jan 27 13:36:04 crc kubenswrapper[4745]: I0127 13:36:04.216890 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-5gvmk_6864b9ac-a4d6-46c5-b994-9710da668093/nmstate-webhook/0.log" Jan 27 13:36:05 crc kubenswrapper[4745]: I0127 13:36:05.967245 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:36:05 crc kubenswrapper[4745]: I0127 13:36:05.967298 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:36:18 crc kubenswrapper[4745]: I0127 13:36:18.077318 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-lvn25_34e4a875-3a3e-43ea-9092-887c194579c5/prometheus-operator/0.log" Jan 27 13:36:18 crc kubenswrapper[4745]: I0127 13:36:18.310184 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-84d7487956-nr9lp_a76fb56b-8fc4-48d9-a356-b1e369938f0f/prometheus-operator-admission-webhook/0.log" Jan 27 13:36:18 crc kubenswrapper[4745]: I0127 13:36:18.396756 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-84d7487956-qb7l2_cc6dbeab-7b0b-4924-a7d9-b1a27b0740cd/prometheus-operator-admission-webhook/0.log" Jan 27 13:36:18 crc kubenswrapper[4745]: I0127 13:36:18.493119 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-78f4x_2fc8aa52-b047-4344-b175-e5b58f406459/operator/0.log" Jan 27 13:36:18 crc kubenswrapper[4745]: I0127 13:36:18.633795 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-wzghm_7d8f40f3-b76c-4b87-8b96-dbb564ce0b8d/perses-operator/0.log" Jan 27 13:36:33 crc kubenswrapper[4745]: I0127 13:36:33.069138 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-djdl6_ecb4dc8b-c615-4fe2-819f-4c799f639d3f/kube-rbac-proxy/0.log" Jan 27 13:36:33 crc kubenswrapper[4745]: I0127 13:36:33.162680 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-djdl6_ecb4dc8b-c615-4fe2-819f-4c799f639d3f/controller/0.log" Jan 27 13:36:33 crc kubenswrapper[4745]: I0127 13:36:33.273707 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6tfdz_9a95157d-c182-4ccc-a603-e314f81ac762/cp-frr-files/0.log" Jan 27 13:36:33 crc kubenswrapper[4745]: I0127 13:36:33.454205 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6tfdz_9a95157d-c182-4ccc-a603-e314f81ac762/cp-frr-files/0.log" Jan 27 
13:36:33 crc kubenswrapper[4745]: I0127 13:36:33.503173 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6tfdz_9a95157d-c182-4ccc-a603-e314f81ac762/cp-reloader/0.log" Jan 27 13:36:33 crc kubenswrapper[4745]: I0127 13:36:33.511409 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6tfdz_9a95157d-c182-4ccc-a603-e314f81ac762/cp-metrics/0.log" Jan 27 13:36:33 crc kubenswrapper[4745]: I0127 13:36:33.542336 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6tfdz_9a95157d-c182-4ccc-a603-e314f81ac762/cp-reloader/0.log" Jan 27 13:36:33 crc kubenswrapper[4745]: I0127 13:36:33.665901 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6tfdz_9a95157d-c182-4ccc-a603-e314f81ac762/cp-frr-files/0.log" Jan 27 13:36:33 crc kubenswrapper[4745]: I0127 13:36:33.711221 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6tfdz_9a95157d-c182-4ccc-a603-e314f81ac762/cp-reloader/0.log" Jan 27 13:36:33 crc kubenswrapper[4745]: I0127 13:36:33.742555 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6tfdz_9a95157d-c182-4ccc-a603-e314f81ac762/cp-metrics/0.log" Jan 27 13:36:33 crc kubenswrapper[4745]: I0127 13:36:33.757381 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6tfdz_9a95157d-c182-4ccc-a603-e314f81ac762/cp-metrics/0.log" Jan 27 13:36:33 crc kubenswrapper[4745]: I0127 13:36:33.942282 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6tfdz_9a95157d-c182-4ccc-a603-e314f81ac762/cp-metrics/0.log" Jan 27 13:36:33 crc kubenswrapper[4745]: I0127 13:36:33.947367 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6tfdz_9a95157d-c182-4ccc-a603-e314f81ac762/cp-reloader/0.log" Jan 27 13:36:33 crc kubenswrapper[4745]: I0127 13:36:33.953882 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6tfdz_9a95157d-c182-4ccc-a603-e314f81ac762/cp-frr-files/0.log" Jan 27 13:36:33 crc kubenswrapper[4745]: I0127 13:36:33.990907 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6tfdz_9a95157d-c182-4ccc-a603-e314f81ac762/controller/0.log" Jan 27 13:36:34 crc kubenswrapper[4745]: I0127 13:36:34.167853 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6tfdz_9a95157d-c182-4ccc-a603-e314f81ac762/frr-metrics/0.log" Jan 27 13:36:34 crc kubenswrapper[4745]: I0127 13:36:34.189838 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6tfdz_9a95157d-c182-4ccc-a603-e314f81ac762/kube-rbac-proxy/0.log" Jan 27 13:36:34 crc kubenswrapper[4745]: I0127 13:36:34.242690 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6tfdz_9a95157d-c182-4ccc-a603-e314f81ac762/kube-rbac-proxy-frr/0.log" Jan 27 13:36:34 crc kubenswrapper[4745]: I0127 13:36:34.372097 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6tfdz_9a95157d-c182-4ccc-a603-e314f81ac762/reloader/0.log" Jan 27 13:36:34 crc kubenswrapper[4745]: I0127 13:36:34.477662 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6tfdz_9a95157d-c182-4ccc-a603-e314f81ac762/frr/0.log" Jan 27 13:36:34 crc kubenswrapper[4745]: I0127 13:36:34.496595 4745 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-q97lc_fa5b5f13-a8ba-490e-97e4-3383c24a13c4/frr-k8s-webhook-server/0.log" Jan 27 13:36:34 crc kubenswrapper[4745]: I0127 13:36:34.685407 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-69c98499d8-74brb_f1273171-d32f-4231-85d4-9c949800ca10/manager/0.log" Jan 27 13:36:34 crc kubenswrapper[4745]: I0127 13:36:34.691399 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-66f8559b6f-b4zgv_340aa282-d1b7-4386-a768-63ee67934411/webhook-server/0.log" Jan 27 13:36:34 crc kubenswrapper[4745]: I0127 13:36:34.866891 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nslkl_802102ed-a580-4495-9855-d86f54160441/kube-rbac-proxy/0.log" Jan 27 13:36:35 crc kubenswrapper[4745]: I0127 13:36:35.043014 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nslkl_802102ed-a580-4495-9855-d86f54160441/speaker/0.log" Jan 27 13:36:35 crc kubenswrapper[4745]: I0127 13:36:35.967365 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:36:35 crc kubenswrapper[4745]: I0127 13:36:35.967713 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:36:48 crc kubenswrapper[4745]: I0127 13:36:48.205978 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb_45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20/util/0.log" Jan 27 13:36:48 crc kubenswrapper[4745]: I0127 13:36:48.401268 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb_45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20/pull/0.log" Jan 27 13:36:48 crc kubenswrapper[4745]: I0127 13:36:48.410323 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb_45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20/pull/0.log" Jan 27 13:36:48 crc kubenswrapper[4745]: I0127 13:36:48.458209 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb_45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20/util/0.log" Jan 27 13:36:48 crc kubenswrapper[4745]: I0127 13:36:48.590787 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb_45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20/pull/0.log" Jan 27 13:36:48 crc kubenswrapper[4745]: I0127 13:36:48.598912 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb_45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20/extract/0.log" Jan 27 13:36:48 crc kubenswrapper[4745]: I0127 13:36:48.602745 4745 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dccpslb_45a8db79-7efb-4b64-bfb3-8cbbb9a6fb20/util/0.log" Jan 27 13:36:48 crc kubenswrapper[4745]: I0127 13:36:48.769384 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl_a8a79568-f1f6-4fda-9a87-6c232bb3b9a6/util/0.log" Jan 27 13:36:48 crc kubenswrapper[4745]: I0127 13:36:48.924345 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl_a8a79568-f1f6-4fda-9a87-6c232bb3b9a6/pull/0.log" Jan 27 13:36:48 crc kubenswrapper[4745]: I0127 13:36:48.934007 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl_a8a79568-f1f6-4fda-9a87-6c232bb3b9a6/pull/0.log" Jan 27 13:36:49 crc kubenswrapper[4745]: I0127 13:36:49.077320 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl_a8a79568-f1f6-4fda-9a87-6c232bb3b9a6/util/0.log" Jan 27 13:36:49 crc kubenswrapper[4745]: I0127 13:36:49.113246 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl_a8a79568-f1f6-4fda-9a87-6c232bb3b9a6/util/0.log" Jan 27 13:36:49 crc kubenswrapper[4745]: I0127 13:36:49.114296 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl_a8a79568-f1f6-4fda-9a87-6c232bb3b9a6/pull/0.log" Jan 27 13:36:49 crc kubenswrapper[4745]: I0127 13:36:49.382035 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk_0b021ec5-0cae-448d-a9da-72a4f4e4ddf7/util/0.log" Jan 27 13:36:49 crc kubenswrapper[4745]: I0127 13:36:49.382345 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h7vnl_a8a79568-f1f6-4fda-9a87-6c232bb3b9a6/extract/0.log" Jan 27 13:36:49 crc kubenswrapper[4745]: I0127 13:36:49.447971 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk_0b021ec5-0cae-448d-a9da-72a4f4e4ddf7/util/0.log" Jan 27 13:36:49 crc kubenswrapper[4745]: I0127 13:36:49.454659 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk_0b021ec5-0cae-448d-a9da-72a4f4e4ddf7/pull/0.log" Jan 27 13:36:49 crc kubenswrapper[4745]: I0127 13:36:49.537750 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk_0b021ec5-0cae-448d-a9da-72a4f4e4ddf7/pull/0.log" Jan 27 13:36:49 crc kubenswrapper[4745]: I0127 13:36:49.716325 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk_0b021ec5-0cae-448d-a9da-72a4f4e4ddf7/extract/0.log" Jan 27 13:36:49 crc kubenswrapper[4745]: I0127 13:36:49.750251 4745 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk_0b021ec5-0cae-448d-a9da-72a4f4e4ddf7/util/0.log" Jan 27 13:36:49 crc kubenswrapper[4745]: I0127 13:36:49.781895 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlwmk_0b021ec5-0cae-448d-a9da-72a4f4e4ddf7/pull/0.log" Jan 27 13:36:49 crc kubenswrapper[4745]: I0127 13:36:49.880154 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4q5vw_f3b1242d-ec83-4442-a05a-d51a3849cdc2/extract-utilities/0.log" Jan 27 13:36:50 crc kubenswrapper[4745]: I0127 13:36:50.087295 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4q5vw_f3b1242d-ec83-4442-a05a-d51a3849cdc2/extract-content/0.log" Jan 27 13:36:50 crc kubenswrapper[4745]: I0127 13:36:50.087460 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4q5vw_f3b1242d-ec83-4442-a05a-d51a3849cdc2/extract-content/0.log" Jan 27 13:36:50 crc kubenswrapper[4745]: I0127 13:36:50.092096 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4q5vw_f3b1242d-ec83-4442-a05a-d51a3849cdc2/extract-utilities/0.log" Jan 27 13:36:50 crc kubenswrapper[4745]: I0127 13:36:50.283063 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4q5vw_f3b1242d-ec83-4442-a05a-d51a3849cdc2/extract-content/0.log" Jan 27 13:36:50 crc kubenswrapper[4745]: I0127 13:36:50.284273 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4q5vw_f3b1242d-ec83-4442-a05a-d51a3849cdc2/extract-utilities/0.log" Jan 27 13:36:50 crc kubenswrapper[4745]: I0127 13:36:50.374624 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4q5vw_f3b1242d-ec83-4442-a05a-d51a3849cdc2/registry-server/0.log" Jan 27 13:36:50 crc kubenswrapper[4745]: I0127 13:36:50.462192 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qmb5t_681c7580-eb96-4022-9795-cd4306094a03/extract-utilities/0.log" Jan 27 13:36:50 crc kubenswrapper[4745]: I0127 13:36:50.632496 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qmb5t_681c7580-eb96-4022-9795-cd4306094a03/extract-utilities/0.log" Jan 27 13:36:50 crc kubenswrapper[4745]: I0127 13:36:50.632528 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qmb5t_681c7580-eb96-4022-9795-cd4306094a03/extract-content/0.log" Jan 27 13:36:50 crc kubenswrapper[4745]: I0127 13:36:50.653959 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qmb5t_681c7580-eb96-4022-9795-cd4306094a03/extract-content/0.log" Jan 27 13:36:50 crc kubenswrapper[4745]: I0127 13:36:50.796264 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qmb5t_681c7580-eb96-4022-9795-cd4306094a03/extract-utilities/0.log" Jan 27 13:36:50 crc kubenswrapper[4745]: I0127 13:36:50.816132 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qmb5t_681c7580-eb96-4022-9795-cd4306094a03/extract-content/0.log" Jan 27 13:36:51 crc kubenswrapper[4745]: I0127 13:36:51.089398 4745 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-fdwrb_b077012b-6cdc-4a9a-85ec-4d9f0f59dce1/marketplace-operator/0.log" Jan 27 13:36:51 crc kubenswrapper[4745]: I0127 13:36:51.174638 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sp442_502a401d-0f57-4a44-a241-a6150f1e3c48/extract-utilities/0.log" Jan 27 13:36:51 crc kubenswrapper[4745]: I0127 13:36:51.583003 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sp442_502a401d-0f57-4a44-a241-a6150f1e3c48/extract-content/0.log" Jan 27 13:36:51 crc kubenswrapper[4745]: I0127 13:36:51.624558 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qmb5t_681c7580-eb96-4022-9795-cd4306094a03/registry-server/0.log" Jan 27 13:36:51 crc kubenswrapper[4745]: I0127 13:36:51.637109 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sp442_502a401d-0f57-4a44-a241-a6150f1e3c48/extract-utilities/0.log" Jan 27 13:36:51 crc kubenswrapper[4745]: I0127 13:36:51.637445 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sp442_502a401d-0f57-4a44-a241-a6150f1e3c48/extract-content/0.log" Jan 27 13:36:51 crc kubenswrapper[4745]: I0127 13:36:51.788225 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sp442_502a401d-0f57-4a44-a241-a6150f1e3c48/extract-utilities/0.log" Jan 27 13:36:51 crc kubenswrapper[4745]: I0127 13:36:51.788256 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sp442_502a401d-0f57-4a44-a241-a6150f1e3c48/extract-content/0.log" Jan 27 13:36:52 crc kubenswrapper[4745]: I0127 13:36:52.003416 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lhq6b_b208e24e-eb1e-4ad3-bb95-5c6ff4581b25/extract-utilities/0.log" Jan 27 13:36:52 crc kubenswrapper[4745]: I0127 13:36:52.024091 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sp442_502a401d-0f57-4a44-a241-a6150f1e3c48/registry-server/0.log" Jan 27 13:36:52 crc kubenswrapper[4745]: I0127 13:36:52.466785 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lhq6b_b208e24e-eb1e-4ad3-bb95-5c6ff4581b25/extract-content/0.log" Jan 27 13:36:52 crc kubenswrapper[4745]: I0127 13:36:52.487392 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lhq6b_b208e24e-eb1e-4ad3-bb95-5c6ff4581b25/extract-utilities/0.log" Jan 27 13:36:52 crc kubenswrapper[4745]: I0127 13:36:52.496212 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lhq6b_b208e24e-eb1e-4ad3-bb95-5c6ff4581b25/extract-content/0.log" Jan 27 13:36:52 crc kubenswrapper[4745]: I0127 13:36:52.641857 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lhq6b_b208e24e-eb1e-4ad3-bb95-5c6ff4581b25/extract-utilities/0.log" Jan 27 13:36:52 crc kubenswrapper[4745]: I0127 13:36:52.647648 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lhq6b_b208e24e-eb1e-4ad3-bb95-5c6ff4581b25/extract-content/0.log" Jan 27 13:36:53 crc kubenswrapper[4745]: I0127 13:36:53.506891 4745 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lhq6b_b208e24e-eb1e-4ad3-bb95-5c6ff4581b25/registry-server/0.log" Jan 27 13:37:04 crc kubenswrapper[4745]: I0127 13:37:04.496090 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-lvn25_34e4a875-3a3e-43ea-9092-887c194579c5/prometheus-operator/0.log" Jan 27 13:37:04 crc kubenswrapper[4745]: I0127 13:37:04.531183 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-84d7487956-qb7l2_cc6dbeab-7b0b-4924-a7d9-b1a27b0740cd/prometheus-operator-admission-webhook/0.log" Jan 27 13:37:04 crc kubenswrapper[4745]: I0127 13:37:04.541618 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-84d7487956-nr9lp_a76fb56b-8fc4-48d9-a356-b1e369938f0f/prometheus-operator-admission-webhook/0.log" Jan 27 13:37:04 crc kubenswrapper[4745]: I0127 13:37:04.699154 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-78f4x_2fc8aa52-b047-4344-b175-e5b58f406459/operator/0.log" Jan 27 13:37:04 crc kubenswrapper[4745]: I0127 13:37:04.736712 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-wzghm_7d8f40f3-b76c-4b87-8b96-dbb564ce0b8d/perses-operator/0.log" Jan 27 13:37:05 crc kubenswrapper[4745]: I0127 13:37:05.967908 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:37:05 crc kubenswrapper[4745]: I0127 13:37:05.967988 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:37:05 crc kubenswrapper[4745]: I0127 13:37:05.968056 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" Jan 27 13:37:05 crc kubenswrapper[4745]: I0127 13:37:05.968722 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d473f6f6a651dec1f03b46be9efe139e691ffcd45dadbbe3da3f3a50a42c45bb"} pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 13:37:05 crc kubenswrapper[4745]: I0127 13:37:05.968767 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" containerID="cri-o://d473f6f6a651dec1f03b46be9efe139e691ffcd45dadbbe3da3f3a50a42c45bb" gracePeriod=600 Jan 27 13:37:06 crc kubenswrapper[4745]: I0127 13:37:06.598875 4745 generic.go:334] "Generic (PLEG): container finished" podID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerID="d473f6f6a651dec1f03b46be9efe139e691ffcd45dadbbe3da3f3a50a42c45bb" exitCode=0 Jan 27 13:37:06 crc kubenswrapper[4745]: I0127 13:37:06.598957 4745 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerDied","Data":"d473f6f6a651dec1f03b46be9efe139e691ffcd45dadbbe3da3f3a50a42c45bb"} Jan 27 13:37:06 crc kubenswrapper[4745]: I0127 13:37:06.599453 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerStarted","Data":"35b8d762b4997f19074cc6a70e856a8f24e703035f68dd955a1d7d8254830c26"} Jan 27 13:37:06 crc kubenswrapper[4745]: I0127 13:37:06.599479 4745 scope.go:117] "RemoveContainer" containerID="988ff0e34359c213f0352c8aa37cb3ef93a0f30900580f699d4770a40996e5cb" Jan 27 13:38:06 crc kubenswrapper[4745]: I0127 13:38:06.087045 4745 generic.go:334] "Generic (PLEG): container finished" podID="879e4c9b-017c-4f48-af00-78e378594425" containerID="3ac6aac87ab82541e6442deab81fdcb665a94ecc670b30beeb708da88b05769d" exitCode=0 Jan 27 13:38:06 crc kubenswrapper[4745]: I0127 13:38:06.087123 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9wq9m/must-gather-dwwbw" event={"ID":"879e4c9b-017c-4f48-af00-78e378594425","Type":"ContainerDied","Data":"3ac6aac87ab82541e6442deab81fdcb665a94ecc670b30beeb708da88b05769d"} Jan 27 13:38:06 crc kubenswrapper[4745]: I0127 13:38:06.088103 4745 scope.go:117] "RemoveContainer" containerID="3ac6aac87ab82541e6442deab81fdcb665a94ecc670b30beeb708da88b05769d" Jan 27 13:38:06 crc kubenswrapper[4745]: I0127 13:38:06.361748 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9wq9m_must-gather-dwwbw_879e4c9b-017c-4f48-af00-78e378594425/gather/0.log" Jan 27 13:38:14 crc kubenswrapper[4745]: I0127 13:38:14.119682 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9wq9m/must-gather-dwwbw"] Jan 27 13:38:14 crc kubenswrapper[4745]: I0127 13:38:14.120224 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-9wq9m/must-gather-dwwbw" podUID="879e4c9b-017c-4f48-af00-78e378594425" containerName="copy" containerID="cri-o://eee8a30081703f7cb0529988abf10c5e1326878cc591846bc6752cc02f83361b" gracePeriod=2 Jan 27 13:38:14 crc kubenswrapper[4745]: I0127 13:38:14.127261 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9wq9m/must-gather-dwwbw"] Jan 27 13:38:14 crc kubenswrapper[4745]: I0127 13:38:14.499466 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9wq9m_must-gather-dwwbw_879e4c9b-017c-4f48-af00-78e378594425/copy/0.log" Jan 27 13:38:14 crc kubenswrapper[4745]: I0127 13:38:14.500334 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9wq9m/must-gather-dwwbw" Jan 27 13:38:14 crc kubenswrapper[4745]: I0127 13:38:14.535660 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/879e4c9b-017c-4f48-af00-78e378594425-must-gather-output\") pod \"879e4c9b-017c-4f48-af00-78e378594425\" (UID: \"879e4c9b-017c-4f48-af00-78e378594425\") " Jan 27 13:38:14 crc kubenswrapper[4745]: I0127 13:38:14.535876 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj6xb\" (UniqueName: \"kubernetes.io/projected/879e4c9b-017c-4f48-af00-78e378594425-kube-api-access-nj6xb\") pod \"879e4c9b-017c-4f48-af00-78e378594425\" (UID: \"879e4c9b-017c-4f48-af00-78e378594425\") " Jan 27 13:38:14 crc kubenswrapper[4745]: I0127 13:38:14.541641 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/879e4c9b-017c-4f48-af00-78e378594425-kube-api-access-nj6xb" (OuterVolumeSpecName: "kube-api-access-nj6xb") pod "879e4c9b-017c-4f48-af00-78e378594425" (UID: "879e4c9b-017c-4f48-af00-78e378594425"). InnerVolumeSpecName "kube-api-access-nj6xb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:38:14 crc kubenswrapper[4745]: I0127 13:38:14.636995 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj6xb\" (UniqueName: \"kubernetes.io/projected/879e4c9b-017c-4f48-af00-78e378594425-kube-api-access-nj6xb\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:14 crc kubenswrapper[4745]: I0127 13:38:14.662432 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/879e4c9b-017c-4f48-af00-78e378594425-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "879e4c9b-017c-4f48-af00-78e378594425" (UID: "879e4c9b-017c-4f48-af00-78e378594425"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:38:14 crc kubenswrapper[4745]: I0127 13:38:14.738005 4745 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/879e4c9b-017c-4f48-af00-78e378594425-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:15 crc kubenswrapper[4745]: I0127 13:38:15.151632 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9wq9m_must-gather-dwwbw_879e4c9b-017c-4f48-af00-78e378594425/copy/0.log" Jan 27 13:38:15 crc kubenswrapper[4745]: I0127 13:38:15.152910 4745 generic.go:334] "Generic (PLEG): container finished" podID="879e4c9b-017c-4f48-af00-78e378594425" containerID="eee8a30081703f7cb0529988abf10c5e1326878cc591846bc6752cc02f83361b" exitCode=143 Jan 27 13:38:15 crc kubenswrapper[4745]: I0127 13:38:15.152979 4745 scope.go:117] "RemoveContainer" containerID="eee8a30081703f7cb0529988abf10c5e1326878cc591846bc6752cc02f83361b" Jan 27 13:38:15 crc kubenswrapper[4745]: I0127 13:38:15.153091 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9wq9m/must-gather-dwwbw"
Jan 27 13:38:15 crc kubenswrapper[4745]: I0127 13:38:15.175647 4745 scope.go:117] "RemoveContainer" containerID="3ac6aac87ab82541e6442deab81fdcb665a94ecc670b30beeb708da88b05769d"
Jan 27 13:38:15 crc kubenswrapper[4745]: I0127 13:38:15.238265 4745 scope.go:117] "RemoveContainer" containerID="eee8a30081703f7cb0529988abf10c5e1326878cc591846bc6752cc02f83361b"
Jan 27 13:38:15 crc kubenswrapper[4745]: E0127 13:38:15.239254 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eee8a30081703f7cb0529988abf10c5e1326878cc591846bc6752cc02f83361b\": container with ID starting with eee8a30081703f7cb0529988abf10c5e1326878cc591846bc6752cc02f83361b not found: ID does not exist" containerID="eee8a30081703f7cb0529988abf10c5e1326878cc591846bc6752cc02f83361b"
Jan 27 13:38:15 crc kubenswrapper[4745]: I0127 13:38:15.239305 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eee8a30081703f7cb0529988abf10c5e1326878cc591846bc6752cc02f83361b"} err="failed to get container status \"eee8a30081703f7cb0529988abf10c5e1326878cc591846bc6752cc02f83361b\": rpc error: code = NotFound desc = could not find container \"eee8a30081703f7cb0529988abf10c5e1326878cc591846bc6752cc02f83361b\": container with ID starting with eee8a30081703f7cb0529988abf10c5e1326878cc591846bc6752cc02f83361b not found: ID does not exist"
Jan 27 13:38:15 crc kubenswrapper[4745]: I0127 13:38:15.239338 4745 scope.go:117] "RemoveContainer" containerID="3ac6aac87ab82541e6442deab81fdcb665a94ecc670b30beeb708da88b05769d"
Jan 27 13:38:15 crc kubenswrapper[4745]: E0127 13:38:15.239663 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ac6aac87ab82541e6442deab81fdcb665a94ecc670b30beeb708da88b05769d\": container with ID starting with 3ac6aac87ab82541e6442deab81fdcb665a94ecc670b30beeb708da88b05769d not found: ID does not exist" containerID="3ac6aac87ab82541e6442deab81fdcb665a94ecc670b30beeb708da88b05769d"
Jan 27 13:38:15 crc kubenswrapper[4745]: I0127 13:38:15.239784 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ac6aac87ab82541e6442deab81fdcb665a94ecc670b30beeb708da88b05769d"} err="failed to get container status \"3ac6aac87ab82541e6442deab81fdcb665a94ecc670b30beeb708da88b05769d\": rpc error: code = NotFound desc = could not find container \"3ac6aac87ab82541e6442deab81fdcb665a94ecc670b30beeb708da88b05769d\": container with ID starting with 3ac6aac87ab82541e6442deab81fdcb665a94ecc670b30beeb708da88b05769d not found: ID does not exist"
Jan 27 13:38:16 crc kubenswrapper[4745]: I0127 13:38:16.082639 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="879e4c9b-017c-4f48-af00-78e378594425" path="/var/lib/kubelet/pods/879e4c9b-017c-4f48-af00-78e378594425/volumes"
Jan 27 13:39:35 crc kubenswrapper[4745]: I0127 13:39:35.967531 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 13:39:35 crc kubenswrapper[4745]: I0127 13:39:35.968123 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 13:40:05 crc kubenswrapper[4745]: I0127 13:40:05.967647 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 13:40:05 crc kubenswrapper[4745]: I0127 13:40:05.968188 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 13:40:35 crc kubenswrapper[4745]: I0127 13:40:35.967585 4745 patch_prober.go:28] interesting pod/machine-config-daemon-gfzkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 13:40:35 crc kubenswrapper[4745]: I0127 13:40:35.968556 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 13:40:35 crc kubenswrapper[4745]: I0127 13:40:35.968632 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp"
Jan 27 13:40:35 crc kubenswrapper[4745]: I0127 13:40:35.969628 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"35b8d762b4997f19074cc6a70e856a8f24e703035f68dd955a1d7d8254830c26"} pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 13:40:35 crc kubenswrapper[4745]: I0127 13:40:35.969715 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerName="machine-config-daemon" containerID="cri-o://35b8d762b4997f19074cc6a70e856a8f24e703035f68dd955a1d7d8254830c26" gracePeriod=600
Jan 27 13:40:36 crc kubenswrapper[4745]: E0127 13:40:36.106726 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625"
Jan 27 13:40:36 crc kubenswrapper[4745]: I0127 13:40:36.338175 4745 generic.go:334] "Generic (PLEG): container finished" podID="49a22b36-6ae4-4887-b364-7d1ac21ff625" containerID="35b8d762b4997f19074cc6a70e856a8f24e703035f68dd955a1d7d8254830c26" exitCode=0
Jan 27 13:40:36 crc kubenswrapper[4745]: I0127 13:40:36.338233 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" event={"ID":"49a22b36-6ae4-4887-b364-7d1ac21ff625","Type":"ContainerDied","Data":"35b8d762b4997f19074cc6a70e856a8f24e703035f68dd955a1d7d8254830c26"}
Jan 27 13:40:36 crc kubenswrapper[4745]: I0127 13:40:36.338281 4745 scope.go:117] "RemoveContainer" containerID="d473f6f6a651dec1f03b46be9efe139e691ffcd45dadbbe3da3f3a50a42c45bb"
Jan 27 13:40:36 crc kubenswrapper[4745]: I0127 13:40:36.338807 4745 scope.go:117] "RemoveContainer" containerID="35b8d762b4997f19074cc6a70e856a8f24e703035f68dd955a1d7d8254830c26"
Jan 27 13:40:36 crc kubenswrapper[4745]: E0127 13:40:36.339127 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625"
Jan 27 13:40:48 crc kubenswrapper[4745]: I0127 13:40:48.077471 4745 scope.go:117] "RemoveContainer" containerID="35b8d762b4997f19074cc6a70e856a8f24e703035f68dd955a1d7d8254830c26"
Jan 27 13:40:48 crc kubenswrapper[4745]: E0127 13:40:48.091435 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625"
Jan 27 13:41:02 crc kubenswrapper[4745]: I0127 13:41:02.074297 4745 scope.go:117] "RemoveContainer" containerID="35b8d762b4997f19074cc6a70e856a8f24e703035f68dd955a1d7d8254830c26"
Jan 27 13:41:02 crc kubenswrapper[4745]: E0127 13:41:02.075263 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625"
Jan 27 13:41:16 crc kubenswrapper[4745]: I0127 13:41:16.073963 4745 scope.go:117] "RemoveContainer" containerID="35b8d762b4997f19074cc6a70e856a8f24e703035f68dd955a1d7d8254830c26"
Jan 27 13:41:16 crc kubenswrapper[4745]: E0127 13:41:16.074727 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625"
Jan 27 13:41:31 crc kubenswrapper[4745]: I0127 13:41:31.073381 4745 scope.go:117] "RemoveContainer" containerID="35b8d762b4997f19074cc6a70e856a8f24e703035f68dd955a1d7d8254830c26"
Jan 27 13:41:31 crc kubenswrapper[4745]: E0127 13:41:31.074130 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625"
Jan 27 13:41:46 crc kubenswrapper[4745]: I0127 13:41:46.074606 4745 scope.go:117] "RemoveContainer" containerID="35b8d762b4997f19074cc6a70e856a8f24e703035f68dd955a1d7d8254830c26"
Jan 27 13:41:46 crc kubenswrapper[4745]: E0127 13:41:46.075714 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625"
Jan 27 13:41:57 crc kubenswrapper[4745]: I0127 13:41:57.073533 4745 scope.go:117] "RemoveContainer" containerID="35b8d762b4997f19074cc6a70e856a8f24e703035f68dd955a1d7d8254830c26"
Jan 27 13:41:57 crc kubenswrapper[4745]: E0127 13:41:57.075510 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625"
Jan 27 13:42:09 crc kubenswrapper[4745]: I0127 13:42:09.073330 4745 scope.go:117] "RemoveContainer" containerID="35b8d762b4997f19074cc6a70e856a8f24e703035f68dd955a1d7d8254830c26"
Jan 27 13:42:09 crc kubenswrapper[4745]: E0127 13:42:09.074064 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625"
Jan 27 13:42:23 crc kubenswrapper[4745]: I0127 13:42:23.074092 4745 scope.go:117] "RemoveContainer" containerID="35b8d762b4997f19074cc6a70e856a8f24e703035f68dd955a1d7d8254830c26"
Jan 27 13:42:23 crc kubenswrapper[4745]: E0127 13:42:23.074730 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625"
Jan 27 13:42:35 crc kubenswrapper[4745]: I0127 13:42:35.073888 4745 scope.go:117] "RemoveContainer" containerID="35b8d762b4997f19074cc6a70e856a8f24e703035f68dd955a1d7d8254830c26"
Jan 27 13:42:35 crc kubenswrapper[4745]: E0127 13:42:35.077175 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625"
Jan 27 13:42:49 crc kubenswrapper[4745]: I0127 13:42:49.073560 4745 scope.go:117] "RemoveContainer" containerID="35b8d762b4997f19074cc6a70e856a8f24e703035f68dd955a1d7d8254830c26"
Jan 27 13:42:49 crc kubenswrapper[4745]: E0127 13:42:49.074371 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfzkp_openshift-machine-config-operator(49a22b36-6ae4-4887-b364-7d1ac21ff625)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfzkp" podUID="49a22b36-6ae4-4887-b364-7d1ac21ff625"